Thursday, November 28, 2013


Note: This article originally appeared on mapstruct.org. In order to spread the word on MapStruct I thought it would be a good idea to post it on this blog, too.

It is my great pleasure to announce the release of MapStruct 1.0.0.Alpha2.

This took us a bit longer than expected, but the new release offers quite a few exciting new features we hope you'll enjoy. The JARs have already been synced to Maven Central under the org.mapstruct group id.

Alternatively you can get a distribution bundle from SourceForge.

Besides several new mapping features (e.g. combining several source objects into one target object), the release provides integration with CDI and Spring to make the retrieval of mapper objects more comfortable. We've added several new implicit data type conversions, and there is now also support for converting Map objects.

Let's have a closer look at some of the additions.

Advanced mapping features

When working with data transfer objects (DTOs) to pass data from the backend to the client, it is common to have one DTO which transports the data of several entities. For this purpose MapStruct now supports mapping methods with several source parameters. The following shows an example:

@Mapper
public interface OrderMapper {

    @Mappings({
        @Mapping(source = "order.name", target = "name"),
        @Mapping(source = "houseNo", target = "houseNumber")
    })
    OrderDto orderAndAddressToOrderDto(Order order, Address deliveryAddress);
}

As for single-parameter methods, all attributes are mapped by name from the source objects to the target object, performing a type conversion if required. In case a property with the same name exists in more than one source object, the source parameter from which to retrieve the property must be specified using the @Mapping annotation, as shown for the name property.

One of the core principles of MapStruct is type-safety. Therefore an error will be raised at generation time if such an ambiguity is not resolved. Note that when mapping a property which exists only once in the source objects to a differently named target property, it is optional to specify the source parameter's name.
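
To get an idea of what MapStruct produces, the generated implementation of the mapper above might look roughly like this (a simplified sketch assuming standard getters and setters on Order, Address and OrderDto; the actual generated code contains additional null handling and differs in details):

public class OrderMapperImpl implements OrderMapper {

    @Override
    public OrderDto orderAndAddressToOrderDto(Order order, Address deliveryAddress) {
        OrderDto orderDto = new OrderDto();

        // name exists on both source objects, so it is taken from the
        // parameter configured via @Mapping:
        orderDto.setName( order.getName() );

        // houseNo exists only once among the sources, so no parameter
        // name had to be given:
        orderDto.setHouseNumber( deliveryAddress.getHouseNo() );

        return orderDto;
    }
}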

Also related to type-safety and correctness of generated mappings is the new option to raise an error in case an attribute of the mapping target is not populated, as this typically indicates an oversight or a configuration error. By default a compiler warning is created in this case. To turn this into a compile error instead, the unmappedTargetPolicy attribute can be used like this:

@Mapper(unmappedTargetPolicy=ReportingPolicy.ERROR)
public interface OrderMapper {

    //...
}

In some cases it is required to update an existing object with the properties of a given source object instead of instantiating a new target object. This use case can be addressed with the help of the @MappingTarget annotation, which denotes one method parameter as the target of the mapping, like this:

@Mapper
public interface OrderMapper {

    void updateOrderEntityFromDto(OrderDto dto, @MappingTarget Order order);
}

Instead of instantiating a new Order object, the generated implementation of the updateOrderEntityFromDto() method will update the given order instance with the attributes from the passed OrderDto.
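
A hypothetical usage could look like this (orderRepository and dto are assumed to exist; the mapper instance is obtained via the Mappers factory):

OrderMapper orderMapper = Mappers.getMapper( OrderMapper.class );

// load the existing entity and copy the DTO's state onto it;
// no new Order instance is created
Order order = orderRepository.findById( dto.getId() );
orderMapper.updateOrderEntityFromDto( dto, order );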

More implicit type conversions

Several new implicit type conversions have been added for cases where the source and target types of a mapped property differ. BigDecimal and BigInteger can now automatically be converted into other numeric types as well as into String. You can find a list of all supported conversions in the reference documentation.

Please beware of a possible loss of value or precision when performing such conversions from larger to smaller numeric types. It is planned for the next milestone to optionally raise a warning in this case.
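
The following plain-Java snippet illustrates the kind of loss that can occur when narrowing; the same effect applies when MapStruct performs such a conversion implicitly:

import java.math.BigDecimal;

public class NarrowingDemo {

    public static void main(String[] args) {
        // longValue() keeps only the lowest 64 bits, so values outside
        // the long range are silently corrupted
        BigDecimal big = new BigDecimal("12345678901234567890");
        System.out.println( big.longValue() );

        // doubleValue() rounds to the nearest representable double,
        // losing precision
        BigDecimal precise = new BigDecimal("0.12345678901234567890123");
        System.out.println( precise.doubleValue() );
    }
}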

It is now also possible to convert Date into String and vice versa. For that purpose a new parameter has been added to the @Mapping annotation which allows specifying a format string as interpreted by SimpleDateFormat:

@Mapper
public interface OrderMapper {

    @Mapping(source="orderDate", target="orderDate", dateFormat="dd.MM.yyyy HH:mm:ss")
    OrderDto orderToOrderDto(Order order);
}
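
This assumes a Date-typed property on the source and a String-typed property on the target, e.g. (hypothetical bean excerpts):

public class Order {
    private Date orderDate; // java.util.Date
    // getters and setters ...
}

public class OrderDto {
    private String orderDate; // will receive e.g. "28.11.2013 14:30:00"
    // getters and setters ...
}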

Integration with CDI and Spring

The recommended way for obtaining mapper instances in the 1.0.0.Alpha1 release was to use the Mappers factory.

Alternatively it is now also possible to work with dependency injection. To make this possible, MapStruct can generate mappers which are CDI or Spring beans, based on which flavor of DI you prefer. In the following example MapStruct is advised to make the generated mapper implementation a CDI bean by specifying "cdi" via the componentModel attribute of the @Mapper annotation:

@Mapper(componentModel="cdi")
public interface OrderMapper {

    //...
}

This allows obtaining an order mapper simply via @Inject (provided you have CDI enabled within your application):

@Inject
private OrderMapper orderMapper;
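
Using Spring works analogously: specify "spring" as the component model, and the generated mapper implementation can be injected e.g. via @Autowired (a minimal sketch, assuming Spring's component scanning picks up the generated class):

@Mapper(componentModel="spring")
public interface OrderMapper {

    //...
}

// in some Spring-managed bean:
@Autowired
private OrderMapper orderMapper;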

Note that all other mappers referenced by a generated mapper class are also obtained via the configured component model. So if you e.g. hook in hand-written mapper classes via @Mapper#uses(), make sure that these mappers are compliant with the chosen component model, e.g. are CDI beans themselves. Refer to the documentation, which describes all the specifics in detail.

On a related note, if you prefer to keep working with the Mappers factory as before, you'll have to adapt your imports, because this class has been moved to the new package org.mapstruct.factory.
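
Obtaining a mapper instance that way then looks like this:

import org.mapstruct.factory.Mappers;

//...

OrderMapper mapper = Mappers.getMapper( OrderMapper.class );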

Further info

This concludes our tour through the new features in MapStruct 1.0.0.Alpha2. You can find the complete list of addressed issues in the change log on GitHub. The reference documentation has been updated to cover all new functionality.

If you have any kind of feedback please make sure to let us know. Either post a comment here or open a discussion in the mapstruct-users group. Bugs and feature requests can be reported in the issue tracker, and your pull requests on GitHub are highly welcome! The contribution guide has all the info you need to get started with hacking on MapStruct.

Many thanks to Andreas Gudian and Lukasz Kryger, who contributed to this release. That's awesome!

Monday, June 3, 2013


It is a great pleasure for me to announce the first release of the MapStruct project.

MapStruct is a code generator which simplifies the implementation of mappings between Java bean types by generating mapping code at compile time, following a convention-over-configuration approach. Unlike reflection-based mapping frameworks, MapStruct generates mapping code at build time, which is fast, type-safe and easy to understand.

The official release announcement describes in detail what MapStruct can do for you and what's on the roadmap for the next releases. The release is available on SourceForge and Maven Central.

Give it a try, your feedback is highly welcome via the mapstruct-users Google group!

Wednesday, March 6, 2013


Every once in a while it happens to me that TestNG tests fail to launch in Eclipse when started in debug mode (run mode works fine). The progress bar freezes at 57% and Eclipse hangs indefinitely.

Some googling brought me to this bug in the TestNG issue tracker. As it seems, the issue is caused by a conflict between the TestNG version used as a (Maven) dependency in the project under test and the TestNG version used by the TestNG Eclipse plug-in. People commented on the issue that it helped them to create a new Eclipse workspace. As I wanted to avoid that, I had a look into the .metadata folder of my workspace, searching for things which might be the potential cause.

And indeed I stumbled upon the folder WORKSPACE_DIR/.metadata/.plugins/org.testng.eclipse/. After deleting this folder and restarting Eclipse, TestNG tests would launch in debug mode again as expected.

Sunday, November 18, 2012


Watching a talk from Square's CTO Bob Lee, I just learned about Dagger, a new dependency injection framework for Java and Android which is currently in the works at Square, Inc.

Considering the number of existing DI solutions in the Java space – e.g. CDI, Google Guice and Spring – one might wonder whether the world really needs yet another DI framework. According to Bob's talk, Dagger (a pun on "directed acyclic graph") is the attempt to create a modern and fast DI framework based on the insights gained during the development and usage of Guice (Bob was the founder of the Guice project at Google). And indeed, Dagger comes up with some quite interesting ideas which I'd like to discuss in more detail in the following.

Overview

Dagger is centered around the annotations for dependency injection defined by JSR 330 (which Bob Lee co-led). This is a good thing because it increases portability of your code between different DI solutions.

Dependencies are retrieved by annotating fields or constructors with @Inject:

public class Circus {

    private final Artist artist;

    @Inject    
    public Circus(Artist artist) {
        this.artist = artist;
    }

    //...
}

To satisfy dependencies, Dagger creates the required objects using their @Inject-annotated constructor (in turn creating and passing any dependencies) or the default no-args constructor.

Where that's not possible (e.g. when an implementation of an interface needs to be injected) provider methods can be used. Provider methods must be annotated with @Provides and be defined in a class annotated with @Module like this:

@Module
public class CircusModule {
    @Provides Artist provideArtist() {
        return new Juggler();
    }
}

The @Module annotation is also used to define the entry point of an application:

@Module( entryPoints=CircusApp.class )
public class CircusModule {
    //...
}

This entry point represents the root of the object graph managed by Dagger. As we'll see in a moment, explicitly defining the root allows for compile-time validation of the dependency graph. An instance of the entry point type can be retrieved from the ObjectGraph class, passing the module(s) to create the graph from:

ObjectGraph objectGraph = ObjectGraph.create(new CircusModule());
CircusApp circus = objectGraph.get(CircusApp.class);
circus.startPerformance();

Dagger also provides support for qualifiers, lazy injection, injection of providers and more. The project's web site gives a good overview. Apart from that, it's interesting to see what Dagger deliberately does not support in order to avoid increased complexity:

  • Circular dependencies between objects
  • Method injection
  • Custom scopes (objects are either newly created for each injection or singleton-scoped; see the sketch below)
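
Singleton scoping is expressed with the standard @Singleton annotation from JSR 330, e.g. on a provider method. The following is a hypothetical variation of the module shown above:

@Module
public class CircusModule {

    // one shared instance for the entire graph; without @Singleton
    // a new Juggler would be created for each injection
    @Provides @Singleton Artist provideArtist() {
        return new Juggler();
    }
}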

Code generation

DI frameworks usually make intensive use of reflection to examine annotations, find injection points, create managed objects etc. While reflection today isn't as expensive as it used to be in earlier years, it still can take a considerable amount of time to create large object graphs with lots of dependencies.

Dagger tries to improve upon that with the help of code generation. It provides a JSR 269 based annotation processor which is used at compile time to create an adapter class for each managed type. These adapter classes contain all the logic required at run time to set up the object graph by invoking constructors and populating references to other objects, without making use of reflection.
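
Conceptually, such an adapter boils down to plain constructor calls. The following is a purely illustrative sketch (not Dagger's actual generated code) of what an adapter for the Circus class from above could look like:

// illustrative only: the essential point is that the @Inject-annotated
// constructor is invoked directly, without any reflection at run time
public final class CircusInjectAdapter {

    public Circus get(Artist artist) {
        return new Circus(artist);
    }
}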

This approach promises performance benefits over the reflection-based ways of creating object graphs typically used by DI frameworks. On my machine, Dagger needed roughly half the time to initialize the graph of the CoffeeApp example using the generated classes compared to using reflection (which it also supports as a fallback). Of course this is by no means a comprehensive benchmark and can't be compared with other frameworks, but it surely shows the potential of the code generation approach.

The annotation processor also performs a validation of the object graph and its dependencies at compile time. If, for instance, no matching type (or more than one) can be found for a given injection point, the build will fail with an error message describing the problem. This helps reduce turn-around times compared to discovering this sort of error only at application start-up. Implementing these checks using an annotation processor makes them available in IDEs (which typically can integrate annotation processors) as well as in headless builds, e.g. on a CI server.

Object graph visualization

Not mentioned in the documentation, Dagger also provides an annotation processor which generates a GraphViz file visualizing the object graph. This may be useful to get an understanding of unknown object graphs.

[Figure: the generated visualization of the CoffeeApp example's object graph]

Summary

Dagger is a new dependency injection framework for Java and Android.

While it's still in the works (the current version is 0.9 and there are still some apparent bugs), I find the concept of using an annotation processor for validating the object graph at compile time and generating code for faster initialization at runtime very interesting. In particular on mobile devices, fast start-up times are essential for a good user experience.

I also like the idea of leaving out features which might provide some value but would add much complexity. One thing I'm missing though is some sort of interceptor or decorator mechanism. This would be helpful for implementing typical cross-cutting concerns.

It'll definitely be interesting to see how the code generation approach works out in practice and whether other DI solutions possibly adapt that idea.

Wednesday, August 29, 2012


Note: This post originally appeared on beanvalidation.org. Please post any feedback over there.

Now that everybody is returning from their summer holidays, the Bean Validation team, too, is getting back to their desks to work at full steam towards revision 1.1.

As you know, the largest new feature will be method validation, that is, the validation of method parameters and return values using constraint annotations. Bean Validation 1.1 early draft 1 lays the groundwork for this, and right now we're tackling some advanced questions still open in that area (by the way, if you haven't tried out the reference implementation of ED1 yet, this is the perfect time to do so and give us your feedback).

The problem

One question the EG is currently discussing is whether and, if so, how a refinement of method constraints should be allowed in sub-types. That is, if a class implements a method of an interface or overrides a method from a super class, should the sub-type be allowed to place any additional constraints?

The current draft defines the following rules for such cases (see the draft document for all the gory details):

  • No parameter constraints may be specified in addition to those constraints defined on the method in the interface or super class.
  • Return value constraints may be added in sub-types.

The rationale

The rationale behind this is the principle of behavioral sub-typing, which demands that wherever a given type T is used, it must be possible to replace T with a sub-type S of T. This means that a sub-type must not strengthen a method's preconditions (by adding parameter constraints), as this might cause client code working correctly against T to fail when working against S. A sub-type may also not weaken a method's postconditions. However, a sub-type may strengthen the method's postconditions (by adding return value constraints), as client code working against T will still work against S.

Can you show me some code, please?

To give you an example, the following shows a constraint declaration considered illegal as of the current draft, as parameter constraints are added to the placeOrder() method in a sub-class of OrderService:

public class OrderService {
    public void placeOrder(@NotNull String customerCode, @NotNull Item item, int quantity) { ... }
}

public class SimpleOrderService extends OrderService {

    @Override
    public void placeOrder(
        @Size(min=3, max=20) String customerCode,
        Item item,
        @Min(1) int quantity) { ... }
}
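
Adding return value constraints in a sub-type, by contrast, is legal, as it only strengthens the method's postconditions. A hypothetical example:

public class OrderService {
    public Order getOrder(String orderCode) { ... }
}

public class ReliableOrderService extends OrderService {

    // legal: guaranteeing a non-null result strengthens the
    // postcondition without affecting callers of OrderService
    @Override
    @NotNull
    public Order getOrder(String orderCode) { ... }
}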

Alternatives

While this approach works, follows the principles of clean OO design and is also employed by other Programming by Contract solutions, some voices in the EG expressed doubts as to whether the handling of parameter constraints is too restrictive and may thus limit innovation in that area. In particular with respect to legacy code, the question was raised whether it shouldn't be allowed to add parameter constraints in sub-types.

One example may be a legacy interface, which technically has no constraints (that is, no parameter constraints are placed on its methods), but comes with a verbal description of preconditions in its documentation. In this case an implementor of that interface might wish to implement this contract by placing corresponding constraint annotations on the implementation.
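
For example (hypothetical types), the legacy interface documents its preconditions only verbally, while the implementor would like to enforce them via annotations, which the current draft forbids:

// legacy interface: preconditions exist only in the documentation
public interface CustomerRepository {

    /**
     * customerCode must not be null and must have at least 3 characters.
     */
    Customer findByCode(String customerCode);
}

// the implementor would like to declare the documented contract;
// illegal under the current draft's rules:
public class DefaultCustomerRepository implements CustomerRepository {

    @Override
    public Customer findByCode(@NotNull @Size(min=3) String customerCode) { ... }
}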

An open question in this situation is what the behavior should be if constraints are added to the interface afterwards.

Give us your feedback!

So what do you think, should such a refinement of parameter constraints be allowed or not? Possible alternatives:

  • allow such a refinement by default
  • have some sort of switch controlling the behavior (either standardized or provider-specific)

As there are pros and cons to either approach, we'd be very interested in user feedback on this.

Let us know what you think by posting a comment directly to this blog, shooting a message to the mailing list or participating in this Doodle vote. Which use cases have you encountered where the possibility to refine parameter constraints would help you?