Category Archives: web services

Custom bean validator with Spring framework and API/implementation barrier

I feel a strong urge to decrease the amount of boilerplate code in the applications I am working on. Writing this code is not productive, not just because of the code itself, but also because of the tests it requires. The bean validation framework can help reduce such boilerplate code.
I created the following constraint to help validate the role on our REST services:

/**
 * Annotation to automatically validate a security role for validity and possibly check the role.
 */
@Target(ElementType.PARAMETER)
@Retention(RetentionPolicy.RUNTIME)
@Constraint(validatedBy = ValidTokenDelegate.class)
public @interface ValidToken {
 
    /**
     * Message when the validation fails.
     */
    String message() default "Token not valid.";
 
    /**
     * Groups for the constraint.
     */
    Class<?>[] groups() default { };
 
    /**
     * Payload.
     */
    Class<? extends Payload>[] payload() default { };
 
    /**
     * Required roles, one of them is sufficient.
     */
    String[] vereisteRollen() default { };
}

Note that, while the documentation does not seem to indicate this, including the message, groups and payload fields in the annotation is mandatory. Luckily, hibernate-validator gives a clear indication when they are missing.
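To give an idea of how such a constraint is used, here is a hypothetical service interface with the annotation on its token parameter. The sketch redeclares a trimmed-down ValidToken (without message, groups and payload, and using a plain String token) so it is self-contained; ReportService and the role names are made up for the example.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.Arrays;

public class ValidTokenUsage {

    // Trimmed-down copy of the ValidToken annotation, for illustration only;
    // the real constraint also carries message, groups and payload.
    @Target(ElementType.PARAMETER)
    @Retention(RetentionPolicy.RUNTIME)
    public @interface ValidToken {
        String[] vereisteRollen() default { };
    }

    // Hypothetical REST interface: the constraint sits on the token parameter.
    public interface ReportService {
        String getReport(@ValidToken(vereisteRollen = { "ADMIN", "AUDITOR" }) String token, long reportId);
    }

    // Read the required roles back via reflection, as a validation framework would.
    public static String[] requiredRoles() {
        try {
            Method m = ReportService.class.getMethod("getReport", String.class, long.class);
            return m.getParameters()[0].getAnnotation(ValidToken.class).vereisteRollen();
        } catch (NoSuchMethodException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(requiredRoles())); // [ADMIN, AUDITOR]
    }
}
```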

The code which does the validation is quite simple. The validation itself is delegated to an injected Spring service.

@Component
@Scope("prototype")
public class ValidTokenValidator implements ConstraintValidator<ValidToken, TokenTO> {
 
    private String[] vereisteRollen;
 
    @Autowired
    private RequestStateService requestStateService;
 
    @Override
    public void initialize(ValidToken constraintAnnotation) {
        vereisteRollen = constraintAnnotation.vereisteRollen();
    }
 
    @Override
    public boolean isValid(TokenTO token, ConstraintValidatorContext context) {
        try {
            requestStateService.validate(token, vereisteRollen);
            return true;
        } catch (AccessDeniedException | ServiceException se) {
            context.disableDefaultConstraintViolation();
            context.buildConstraintViolationWithTemplate(se.getMessage()).
                    addConstraintViolation();
            return false;
        }
    }
}

Unfortunately, there are two problems in getting this to work. The application is modular, with a clear split between API modules – containing only interfaces and transfer objects – and implementation modules. The service which actually checks the constraint only exists in the implementation module, but the constraint needs to be specified on interfaces in the API module. This means the ValidToken annotation also needs to be in the API module (correctly so). The @Constraint annotation makes the annotation a bean validation constraint, but it requires a reference to the validator class, which breaks the module split.
Secondly, there is a problem with the Spring injection in the validator class. The system is wired together automatically, simply by including the dependency on hibernate-validator in the Maven POM. It then requires just a few lines in the Spring configuration:

<bean id="validator" class="org.springframework.validation.beanvalidation.LocalValidatorFactoryBean"/>
<bean class="org.springframework.validation.beanvalidation.MethodValidationPostProcessor"/>

Unfortunately though, it seems both hibernate-validator and Spring try to define the validator factory, and I found no way to ensure the Spring one is defined last (though I basically need both anyway).

As a solution to both problems, I introduced a delegate for the validator.

@Component // @todo component not needed, used to force autowiring as LocalValidatorFactoryBean is not used
public class ValidTokenDelegate implements ConstraintValidator<ValidToken, TokenTO> {
 
    private static final Logger LOG = LoggerFactory.getLogger(ValidTokenDelegate.class);
 
    private static ApplicationContext applicationContext;
 
    private ConstraintValidator constraintValidator;
 
 
    /**
     * Assure application context is set.
     * @todo dirty, automatic injection of autowired stuff is not working
     *
     * @param applicationContext application context
     */
    @Autowired
    public void setApplicationContext(ApplicationContext applicationContext) {
        LOG.info("ValidTokenDelegate is wired.");
        ValidTokenDelegate.applicationContext = applicationContext;
    }
 
    @Override
    public void initialize(ValidToken constraintAnnotation) {
        constraintValidator = (ConstraintValidator) applicationContext.getBean("validTokenValidator");
        constraintValidator.initialize(constraintAnnotation);
    }
 
    @Override
    public boolean isValid(TokenTO value, ConstraintValidatorContext context) {
        return constraintValidator.isValid(value, context);
    }
}

The delegate is placed in the API module. While this is not conceptually the right place, it fixes the dependency cycle.
The class is annotated with @Component to make sure that Spring wiring takes place. The wiring is done using a setter which sets a static field, making the value available in all instances. This is likely to cause problems if you try to change your application context without restarting the application.
Note that the application context is autowired, not the validTokenValidator bean. This is mandatory. The bean validation framework is allowed to cache validator instances, calling the initialize method only once and isValid more often (possibly from different threads). This requires a fresh ValidTokenValidator for each delegate, so the validator bean needs prototype scope and you have to make sure a new bean is created for each delegate instance.
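The static-field trick can be illustrated with a small stand-in (plain Java, no Spring): the value set via the setter on one instance is visible in instances created later, which is exactly what lets delegates instantiated by the validation framework reach the Spring-wired context. The Delegate class and the String "context" below are simplifications for the demo.

```java
public class StaticWiringDemo {

    // Simplified stand-in for the delegate: the context is held in a static
    // field, so even instances created outside the container can reach it.
    static class Delegate {
        private static String context; // stands in for the ApplicationContext

        // Called once by Spring on the managed instance.
        public void setContext(String ctx) {
            Delegate.context = ctx;
        }

        public String lookup() {
            return context;
        }
    }

    public static String run() {
        Delegate springManaged = new Delegate();
        springManaged.setContext("application-context");
        // The validation framework instantiates its own delegate...
        Delegate frameworkCreated = new Delegate();
        // ...which still sees the statically wired context.
        return frameworkCreated.lookup();
    }

    public static void main(String[] args) {
        System.out.println(run()); // application-context
    }
}
```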

Edit Nov 2015: there seems to be another way, see http://www.twitlonger.com/show/n_1snrbeo

State based setting of fields in CRUD (REST) services

In CRUD REST services it often happens that the data which can be requested, saved or updated is not always the same. It can depend on various aspects, like the role of the user or the state of the object in question.

Let’s see how this can be solved declaratively using jTransfo. The actual services use transfer objects. While this introduces an extra step to copy the fields (which would not exist when using detached objects), it explicitly allows control over the data to copy, possibly changing the representation. Additionally, it shields your services from changes in your domain model, which allows you to keep your API stable.

For this example, let’s look at a proposal. This contains estimates of amounts (and unit price) of goods which are needed to do a particular job. When the proposal has been accepted by the customer, the actual amounts can be set on the estimates. This results in the following domain model.

Object model

A proposal can have a couple of states:

  • created: the proposal has not yet been approved. Estimates can be added but no executions.
  • approved: executions can be added, the estimated counts cannot be changed (adding new estimates with product link and no (zero) count is possible).
  • finished: no more changes allowed.

The checks on when fields can be updated are handled declaratively in the transfer objects, so the logic for the create, get and update methods (skipping validation) is quite lean.

@Autowired
private JTransfo jTransfo;
 
@Autowired
private ProposalDao proposalDao;
 
@Transactional
public void create(ProposalTo proposalTo) {
    Proposal proposal = jTransfo.convertTo(proposalTo, Proposal.class, "CREATE");
    proposalDao.save(proposal);
}
 
@Transactional
public void update(ProposalTo proposalTo) {
    Proposal proposal = proposalDao.get(proposalTo.getId());
    proposal = jTransfo.convert(proposalTo, proposal, proposal.getState());
    proposalDao.update(proposal);
}
 
@Transactional(readOnly = true)
public ProposalTo get(long proposalId) {
    Proposal proposal = proposalDao.get(proposalId);
    return jTransfo.convertTo(proposal, ProposalTo.class);
}

The magic is in the tags passed (last parameter) to the jTransfo convert methods in the create and update methods, combined with annotations in the transfer object definitions. When creating, “CREATE” is passed; when the object already exists, its state is used as the tag. The methods to change the state of the object are not shown here.

In the transfer objects, MapOnly annotations are used to indicate when (based on the tags) fields can be copied. A field without a MapOnly annotation is always copied (unless it is marked with @NotMapped). When a MapOnly annotation exists, the field is only copied when the conversion is passed a tag which is mentioned in the annotation. Defining several MapOnly annotations is possible by grouping them inside a MapOnlies annotation. As fallback, “*” matches all tags.
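The matching rules can be sketched in plain Java. Entry and isWritable below are hypothetical helpers mirroring the semantics just described (including the readOnly attribute used in the examples that follow), not jTransfo's actual implementation.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class MapOnlySketch {

    // Hypothetical model of one @MapOnly entry: the tags it matches and
    // whether the field is read-only under those tags.
    public static final class Entry {
        final Set<String> tags;
        final boolean readOnly;

        public Entry(boolean readOnly, String... tags) {
            this.readOnly = readOnly;
            this.tags = new HashSet<>(Arrays.asList(tags));
        }

        boolean matches(String convertTag) {
            return tags.contains("*") || tags.contains(convertTag);
        }
    }

    // A field is written when some matching entry is not read-only;
    // without any @MapOnly entries the field is always copied.
    public static boolean isWritable(List<Entry> entries, String convertTag) {
        if (entries.isEmpty()) {
            return true;
        }
        for (Entry entry : entries) {
            if (entry.matches(convertTag) && !entry.readOnly) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // estimatedCount: writable for CREATE and CREATED, read-only otherwise
        List<Entry> estimatedCount = Arrays.asList(
                new Entry(false, "CREATE", "CREATED"),
                new Entry(true, "*"));
        System.out.println(isWritable(estimatedCount, "CREATED"));  // true
        System.out.println(isWritable(estimatedCount, "APPROVED")); // false
    }
}
```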

The transfer object definitions use lombok to generate getters and setters.

@Data
@ToString(callSuper = true)
@EqualsAndHashCode(callSuper = true)
@DomainClass("pkg.Proposal")
public class ProposalTo extends AbstractChangeLoggedIdentifiedTo {
 
    @MapOnlies({
            @MapOnly(value = { "CREATE" }),
            @MapOnly(value = "*", readOnly = true)
    })
    private String description;
 
    @MappedBy(readOnly = true)
    private String state; // "CREATED", "APPROVED", "FINISHED"
 
    @MapOnlies({
            @MapOnly(value = { "CREATE", "CREATED", "APPROVED" }),
            @MapOnly(value = "*", readOnly = true)
    })
    @MappedBy(typeConverter = "estimateToList")
    private List<EstimateTo> estimates = new ArrayList<EstimateTo>();
}
  • Description can only be set when creating.
  • State is read-only.
  • Estimates can be changed as long as the object is not in finished state. Contents controlled below.
@Data
@ToString(callSuper = true)
@EqualsAndHashCode(callSuper = true)
@DomainClass("pkg.Estimate")
public class EstimateTo extends AbstractIdentifiedTo {
 
    @MapOnlies({
            @MapOnly(value = { "CREATE", "CREATED" }),
            @MapOnly(value = "APPROVED", typeConverter = "newReadOnlyDomain" ),
            @MapOnly(value = "*", readOnly = true)
    })
    @MappedBy(typeConverter = "readOnlyDomain")
    private ProductTo product;
 
    @MapOnlies({
            @MapOnly(value = { "CREATE", "CREATED"  }),
            @MapOnly(value = "*", readOnly = true)
    })
    private Double estimatedCount;
 
    @MapOnlies({
            @MapOnly(value = { "APPROVED" }),
            @MapOnly(value = "*", readOnly = true)
    })
    @MappedBy(typeConverter = "executionToList")
    private List<ExecutionTo> uitvoeringen = new ArrayList<ExecutionTo>();
}
  • The product (link, the readOnlyDomain type converter does not recurse to updating the product fields) can be set when the proposal is not approved. Once the proposal is approved, the “newReadOnlyDomain” converter assures that the field can only be set when it was null, allowing the product to be set on new estimate objects.
  • Estimated count can only be set as long as the proposal is not approved.
  • Executions can only be set if the proposal is approved (but not finished).
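The “newReadOnlyDomain” behaviour on the product link boils down to a single rule: the incoming value is only applied when the current value is still null. This is a hypothetical one-method sketch of that rule, not jTransfo's converter code.

```java
public class NewReadOnlySketch {

    // Sketch of the "newReadOnlyDomain" semantics described above: an
    // existing link is never overwritten, but a null link may be set.
    public static <T> T apply(T current, T incoming) {
        return current == null ? incoming : current;
    }

    public static void main(String[] args) {
        System.out.println(apply(null, "product-1"));        // set on a new estimate
        System.out.println(apply("product-1", "product-2")); // existing link kept
    }
}
```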

For completeness, the product and execution transfer objects are also displayed. The product records are not updated as the estimate uses the “readOnlyDomain” type converter. The execution has no restrictions.

@Data
@ToString(callSuper = true)
@EqualsAndHashCode(callSuper = true)
@DomainClass("pkg.Product")
public class ProductTo extends AbstractIdentifiedTo {
 
    private String description;
    private Double unitPrice;
}
@Data
@ToString(callSuper = true)
@EqualsAndHashCode(callSuper = true)
@DomainClass("pkg.Execution")
public class ExecutionTo extends AbstractIdentifiedTo {
 
    private double count;
    private Date date;
}

Profiling a live application

I need profiling information. I want to check the performance of the application and see whether time is spent more in the front-end, back-end, database,… Running a full-blown profiler on the application slows things down way too much. This makes it difficult to get a general view and definitely cannot be done in a real test environment (let alone production).

As an alternative, I used geomajas-project-profiling to gather the profiling data. This provides a Spring bean to register invocations. These registrations are combined (thanks to the LMAX Disruptor). You can get the information using JMX, but I also implemented a REST service to view the information.

++++ REST calls:
                                                             |          3 |         14 |       4.67
                                                       group |      count | total time |   avg time
                                 GebiedService:getAfdelingen |          1 |         11 |      11.00
                       GebruikerService:syncGebruikerWithPno |          1 |          2 |       2.00
                                    ProfilingService:profile |          1 |          1 |       1.00
 
 
++++ JDBC calls (by method):
                                                             |         68 |          7 |       0.10
                                                       group |      count | total time |   avg time
                                           Connection.commit |          8 |          2 |       0.25
                                      Connection.getMetaData |          6 |          0 |       0.00
                                 Connection.prepareStatement |          6 |          0 |       0.00
                                   PreparedStatement.execute |          6 |          3 |       0.50
                              PreparedStatement.executeQuery |          9 |          2 |       0.22
                                             Statement.close |          6 |          0 |       0.00
                                        Statement.getMaxRows |          9 |          0 |       0.00
                                    Statement.getMoreResults |          6 |          0 |       0.00
                                      Statement.getResultSet |          6 |          0 |       0.00
                                    Statement.getUpdateCount |          6 |          0 |       0.00
 
 
++++ JDBC calls (by REST service):
                                                             |         68 |          7 |       0.10
                                                       group |      count | total time |   avg time
                                                             |         36 |          5 |       0.14
                                 GebiedService:getAfdelingen |         27 |          2 |       0.07
                       GebruikerService:syncGebruikerWithPno |          5 |          0 |       0.00

As you can see above, I wanted to monitor the REST invocations (implemented using RESTEasy) and JDBC invocations.

Basics, add profiling containers

The bits are wired together using Spring Framework. You do need to declare the profiling containers, which are beans which combine the registrations and allow reading the invocation count and total time spent.
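Conceptually, such a container just accumulates an invocation count and total time per group. A minimal plain-Java stand-in could look like this; it is a sketch only, as the real ProfilingContainer batches registrations through an LMAX Disruptor ring buffer rather than hitting a concurrent map on every call.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class SimpleProfilingContainer {

    // Per-group data: invocation count and total time in milliseconds.
    public static final class Data {
        public final AtomicLong count = new AtomicLong();
        public final AtomicLong totalTime = new AtomicLong();

        public double averageTime() {
            long c = count.get();
            return c == 0 ? 0.0 : (double) totalTime.get() / c;
        }
    }

    private final ConcurrentHashMap<String, Data> groups = new ConcurrentHashMap<>();

    // Register one invocation for the given group.
    public void register(String group, long durationMillis) {
        Data data = groups.computeIfAbsent(group, g -> new Data());
        data.count.incrementAndGet();
        data.totalTime.addAndGet(durationMillis);
    }

    public Data get(String group) {
        return groups.get(group);
    }

    public static void main(String[] args) {
        SimpleProfilingContainer container = new SimpleProfilingContainer();
        container.register("GebiedService:getAfdelingen", 11);
        container.register("GebiedService:getAfdelingen", 3);
        Data data = container.get("GebiedService:getAfdelingen");
        System.out.println(data.count.get() + " calls, avg " + data.averageTime()); // 2 calls, avg 7.0
    }
}
```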

<bean name="restProfiling" class="org.geomajas.project.profiling.service.ProfilingContainer">
    <property name="ringSize" value="128" />
</bean>
<bean name="jdbcServiceProfiling" class="org.geomajas.project.profiling.service.ProfilingContainer">
    <property name="ringSize" value="128" />
</bean>
<bean name="jdbcMethodProfiling" class="org.geomajas.project.profiling.service.ProfilingContainer">
    <property name="ringSize" value="128" />
</bean>
<bean name="gatewayServiceProfiling" class="org.geomajas.project.profiling.service.ProfilingContainer">
    <property name="ringSize" value="128" />
</bean>
<bean name="gatewayMethodProfiling" class="org.geomajas.project.profiling.service.ProfilingContainer">
    <property name="ringSize" value="128" />
</bean>
 
<!-- Add bean to profile JDBC calls -->
<bean class="myproject.profiling.JdbcProfiling" />

The last bean is included to allow logging JDBC calls.

Log REST invocations

The application uses RESTEasy. In RESTEasy, you can register interceptors to gather the required information. The @Provider and @ServerInterceptor annotations assure that the interceptor is picked up.

The interceptors only have pre- and post-process hooks, no around hook, so thread-local variables are used to store the invocation start time and profile group. For the profile group, a combination of the class and method name of the invoked method is used. I first wanted to use the REST URL, but as this includes some parameters it would either create too many groups or require some additional processing to strip the parameters.
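The thread-local timing pattern itself is independent of RESTEasy. A minimal sketch, with pre() and post() standing in for the interceptor hooks:

```java
public class ThreadLocalTiming {

    // The interceptor has separate pre and post hooks, so the start time
    // travels between them in a ThreadLocal.
    static final ThreadLocal<Long> START = new ThreadLocal<>();

    public static void pre() {
        START.set(System.currentTimeMillis());
    }

    public static long post() {
        long duration = System.currentTimeMillis() - START.get();
        START.remove(); // avoid leaking state into the next request on this thread
        return duration;
    }

    public static void main(String[] args) throws InterruptedException {
        pre();
        Thread.sleep(50); // simulated request handling
        System.out.println("handled in " + post() + " ms");
    }
}
```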

The actual registration is done in the postProcess() method.

@Provider
@ServerInterceptor
@Component("loggerInterceptor")
public class ProfilingInterceptor
        implements PreProcessInterceptor, PostProcessInterceptor {
    // String indicating the grouping for the profiling. Each service is handled independently.
    public static final ThreadLocal<String> PROFILE_GROUP = new ThreadLocal<String>();
 
    // Start timestamp of the current request.
    public static final ThreadLocal<Long> START_MOMENT = new ThreadLocal<Long>();
 
    @Autowired
    @Qualifier("restProfiling")
    private ProfilingContainer profilingContainer;
 
    @Override
    public ServerResponse preProcess(HttpRequest request, ResourceMethod method)
            throws Failure, WebApplicationException {
        START_MOMENT.set(System.currentTimeMillis());
        PROFILE_GROUP.set(method.getMethod().getDeclaringClass().getSimpleName() + ":" + method.getMethod().getName());
        return null;
    }
 
    @Override
    public void postProcess(ServerResponse response) {
        long now = System.currentTimeMillis();
        profilingContainer.register(PROFILE_GROUP.get(), now - START_MOMENT.get());
 
        PROFILE_GROUP.remove();
        START_MOMENT.remove();
    }
}

Log JDBC invocations

geomajas-project-profiling includes a JDBC driver delegate which can be used to profile JDBC calls. However, you still need to connect that with your profiling container. Using this driver, you can use a JDBC URL like “profiling:jdbc:postgresql://localhost:5432/db” instead of “jdbc:postgresql://localhost:5432/db” to enable the profiling. This works for any JDBC driver, not only the PostgreSQL one.

This introduces a small complication: you now need two JDBC drivers to be available in your system. This is solved by using a custom class (JdbcProfiling) which forces both the real JDBC driver and the profiling driver to be loaded.
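The delegating-driver idea boils down to claiming URLs with the “profiling:” prefix and handing the stripped URL to the real driver. A sketch of that mechanism (illustration only, not the geomajas driver's code):

```java
public class ProfilingUrl {

    public static final String PREFIX = "profiling:";

    // A delegating driver would accept only URLs carrying its prefix...
    public static boolean acceptsUrl(String url) {
        return url != null && url.startsWith(PREFIX);
    }

    // ...and pass the stripped URL on to the wrapped real driver.
    public static String delegateUrl(String url) {
        return acceptsUrl(url) ? url.substring(PREFIX.length()) : url;
    }

    public static void main(String[] args) {
        System.out.println(delegateUrl("profiling:jdbc:postgresql://localhost:5432/db"));
        // jdbc:postgresql://localhost:5432/db
    }
}
```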

This class is registered as a bean in your Spring application context (see above) so that it knows the profiling containers to use and can register itself as a listener with the profiling driver.

In this case, I don’t just log using the JDBC method calls (the default groups passed by the profiling driver), but also using the REST calls as groups to make it possible to correlate slow REST calls with their time in the database.

public class JdbcProfiling implements ProfilingListener, InitializingBean {
 
    @Autowired
    @Qualifier("jdbcMethodProfiling")
    private ProfilingContainer jdbcMethodProfilingContainer;
 
    @Autowired
    @Qualifier("jdbcServiceProfiling")
    private ProfilingContainer jdbcServiceProfilingContainer;
 
    static {
        // trick to force real JDBC drivers to be loaded
        try {
            Class.forName("org.geomajas.project.profiling.jdbc.ProfilingDriver");
        } catch (ClassNotFoundException cnfe) {
            throw new IllegalStateException("Missing Profiling driver.");
        }
        try {
            Class.forName("org.postgresql.Driver");
        } catch (ClassNotFoundException cnfe) {
            throw new IllegalStateException("Missing PostgreSQL driver.");
        }
    }
 
    /**
     * Register the profiling listener.
     */
    @Override
    public void afterPropertiesSet() {
        ProfilingDriver.addListener(this);
    }
 
    @Override
    public void register(String group, long durationMillis) {
        jdbcMethodProfilingContainer.register(group, durationMillis);
        jdbcServiceProfilingContainer.register(ProfilingInterceptor.PROFILE_GROUP.get(), durationMillis);
    }
}

REST service to display the data

To finish, let’s define a simple service which allows getting the data and resetting the counters.

@Path("/profile")
public interface ProfilingService {
 
    /**
     * Get the status of the application.
     *
     * @param clear should the counters be reset (after showing the current values)
     * @return The status
     */
    @GET
    @GZIP
    @Produces("text/plain")
    String profile(@QueryParam("clear") @DefaultValue("false") boolean clear);
}

The implementation is quite easy: just loop over the information from the profiling containers and format it nicely.

@Component
public class ProfilingServiceImpl implements ProfilingService {
 
    private static final String LINE_FORMAT = "%60s | %10d | %10d | %10.2f\n";
    private static final String LINE_HEADER = "                                                       group" +
            " |      count | total time |   avg time\n";
 
    @Autowired
    @Qualifier("restProfiling")
    private ProfilingContainer restProfilingContainer;
 
    @Autowired
    @Qualifier("jdbcMethodProfiling")
    private ProfilingContainer jdbcMethodProfilingContainer;
 
    @Autowired
    @Qualifier("jdbcServiceProfiling")
    private ProfilingContainer jdbcServiceProfilingContainer;
 
    @Override
    public String profile(boolean clear) {
        StringBuilder sb = new StringBuilder();
 
        sb.append("++++ REST calls:\n");
        append(sb, restProfilingContainer);
        sb.append("\n++++ JDBC calls (by method):\n");
        append(sb, jdbcMethodProfilingContainer);
        sb.append("\n++++ JDBC calls (by REST service):\n");
        append(sb, jdbcServiceProfilingContainer);
 
        if (clear) {
            restProfilingContainer.clear();
            jdbcMethodProfilingContainer.clear();
            jdbcServiceProfilingContainer.clear();
        }
 
        return sb.toString();
    }
 
    private void append(StringBuilder sb, ProfilingContainer profilingContainer) {
        ProfilingData total = profilingContainer.getTotal();
        sb.append(String.format(LINE_FORMAT, "",
                total.getInvocationCount(), total.getTotalRunTime(), total.getAverageRunTime()));
        sb.append(LINE_HEADER);
        for (GroupData groupData : profilingContainer.getGroupData()) {
            sb.append(String.format(LINE_FORMAT, groupData.getGroup(),
                    groupData.getInvocationCount(), groupData.getTotalRunTime(), groupData.getAverageRunTime()));
        }
        sb.append("\n");
    }
}