Results for category "Java"

54 Articles

How to programmatically insert versioned initial data into Spring Boot applications

One of the common tasks required for an application using a persistence store is to initialize the underlying database with basic data sets. Most of the time this means something like admin users or default roles.

Setting the stage

To give a proper example, we have the database table role with the two key columns id (primary key) as an internal ID and uuid (primary key) as an external key.
In Liquibase, our changeset for this table has the following definition:

	<changeSet author="schakko" id="schema-core">
		<createTable tableName="role">
			<column name="id" type="BIGSERIAL" autoIncrement="true">
				<constraints nullable="false" primaryKey="true" unique="true"
					uniqueConstraintName="unq_role_id" />
			</column>
			<column name="uuid" type="UUID">
				<constraints nullable="false" primaryKey="true" unique="true"
					uniqueConstraintName="unq_role_uuid" />
			</column>
			<column name="name" type="varchar(255)">
				<constraints nullable="false" unique="true" />
			</column>
		</createTable>
	</changeSet>

My requirements are:

  • I want to add multiple custom roles into this table
  • The uuid field must be randomly generated
  • The schema definition must work on H2 and PostgreSQL without the uuid-ossp module. Our application backend is responsible for the generation of UUIDs.

Initializing databases with Spring Boot’s native features

With Java, specifically Spring Boot, there are two ways to initialize the database:

  1. Hibernate, and therefore Spring Boot with JPA, checks for a file named import.sql in the root of the classpath. This file is executed on startup when Hibernate creates the schema.
  2. The file data.sql, or data-${platform}.sql for a specific DBMS, is used for importing SQL data through the plain JDBC datasource, without involving any JPA machinery.

For simple tasks, both options are feasible. But in our case they can’t fulfil the requirements: a common SQL UUID generator function like generate_uuid() does not exist and probably won’t ever be standardized in SQL. So we would need two separate data.sql files, one for each database management system. And even then we still wouldn’t have access to the OSSP module for generating a UUID in PostgreSQL.
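
To illustrate, the platform-specific variant would look roughly like the following sketch (assuming Spring Boot 1.x, where the spring.datasource.platform property selects the data-${platform}.sql file; note that the PostgreSQL script needs exactly the uuid-ossp module we want to avoid):

# application.properties
spring.datasource.platform=postgresql

-- data-postgresql.sql (requires the uuid-ossp extension)
INSERT INTO role (uuid, name) VALUES (uuid_generate_v4(), 'ADMIN');

-- data-h2.sql
INSERT INTO role (uuid, name) VALUES (random_uuid(), 'ADMIN');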

Inserting data programmatically

Why not use a simple ApplicationListener to generate the roles during the startup of the Spring framework?

@RequiredArgsConstructor
@Component
@Order(Ordered.HIGHEST_PRECEDENCE)
public class InsertRoleStamdata implements ApplicationListener<ApplicationReadyEvent> {
	@NonNull
	private final RoleRepository roleRepository;

	@Override
	public void onApplicationEvent(ApplicationReadyEvent event) {
		if (roleRepository.count() > 0) {
			return;
		}

		roleRepository.save(new Role("ADMIN", java.util.UUID.randomUUID()));
	}
}

This obviously works and is executed on every startup of the application. With the if condition, we ensure that we only insert a role if no role is present yet.
But what happens if the role ADMIN has to be renamed to ADMINISTRATOR? If you think about it, the code above can rapidly grow into an ugly monster of condition checks and edge cases. And if you want to refactor it and split a migration into different classes, you have to retain the execution order of the listeners, and so on.
Besides all of this, we need some traceable versioning.
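
To illustrate the drift, handling just that rename already pushes the listener towards something like this (a deliberately crude sketch; findByName is an assumed repository method):

	public void onApplicationEvent(ApplicationReadyEvent event) {
		Role admin = roleRepository.findByName("ADMIN");
		Role administrator = roleRepository.findByName("ADMINISTRATOR");

		if (admin != null && administrator == null) {
			// migration case: rename the existing role
			admin.setName("ADMINISTRATOR");
			roleRepository.save(admin);
		} else if (admin == null && administrator == null) {
			// fresh database: insert the role under its new name
			roleRepository.save(new Role("ADMINISTRATOR", java.util.UUID.randomUUID()));
		}
		// ... plus another branch for every future change
	}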

Using a schema migration tool

For obvious reasons, a schema migration tool like Liquibase or Flyway should be the way to go. But how can it fulfil our requirements?

In Liquibase we can define a changeset which uses the insert tag:

    <changeSet author="schakko" id="role-stamdata">
        <insert tableName="role">
            <column name="uuid" value="${random_uuid_function}"/>
            <column name="name" value="ADMIN"/>
        </insert>
    </changeSet>

This is fine, but as already mentioned:

Neither Flyway nor Liquibase is able to interpolate a variable placeholder (like ${random_uuid_function}) with a function callback defined in Java.
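
Liquibase’s static changelog properties can at least switch values per DBMS, e.g.:

    <property name="random_uuid_function" value="uuid_generate_v4()" dbms="postgresql"/>
    <property name="random_uuid_function" value="random_uuid()" dbms="h2"/>

But the value is fixed when the changelog is parsed; there is no hook to compute it in Java, and the PostgreSQL variant would again require the uuid-ossp module.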

Using a schema migration tool programmatically

Fortunately, Flyway and Liquibase both support programmatically defined changesets: you can write Java code which executes the SQL statements. In Liquibase you have to use the customChange tag. The following snippet shows the required definition in YAML:

databaseChangeLog:
  - changeSet:
      id: create-default-roles
      author: schakko
      changes:
        - customChange:
            class: de.schakko.sample.changeset.DefaultRoles20171107

The class de.schakko.sample.changeset.DefaultRoles20171107 must implement the interface CustomTaskChange:

public class DefaultRoles20171107 implements CustomTaskChange {

	@Override
	public String getConfirmationMessage() {
		return null;
	}

	@Override
	public void setUp() throws SetupException {
	}

	@Override
	public void setFileOpener(ResourceAccessor resourceAccessor) {
	}

	@Override
	public ValidationErrors validate(Database database) {
		return null;
	}

	@Override
	public void execute(Database database) throws CustomChangeException {
		JdbcTemplate jdbcTemplate = new JdbcTemplate(new SingleConnectionDataSource(((JdbcConnection)database.getConnection()).getUnderlyingConnection(), false));
		jdbcTemplate.update("INSERT INTO role (uuid, name) VALUES (?, ?)", new Object[] { java.util.UUID.randomUUID(), "ADMIN" });
	}

}

Liquibase’s Spring Boot auto-configuration runs at an early stage, when Hibernate is not yet loaded. Because of this we can’t inject any Spring Data JPA repositories by default. Even accessing the Spring context is not easy; you would have to expose the application context through a static attribute, and so on.
With Flyway the Spring integration is much better.
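
For comparison, a rough sketch of the Flyway counterpart, assuming Flyway’s SpringJdbcMigration interface (present in the Flyway versions of that era):

import org.flywaydb.core.api.migration.spring.SpringJdbcMigration;
import org.springframework.jdbc.core.JdbcTemplate;

public class V2__InsertDefaultRoles implements SpringJdbcMigration {

	@Override
	public void migrate(JdbcTemplate jdbcTemplate) throws Exception {
		// same idea as the Liquibase CustomTaskChange: the UUID is generated in Java
		jdbcTemplate.update("INSERT INTO role (uuid, name) VALUES (?, ?)",
				java.util.UUID.randomUUID(), "ADMIN");
	}
}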

Conclusion

This blog post demonstrated how initial data can be inserted into a Spring Boot application’s database. In addition to that, we discussed how this data can be versioned in a database-independent manner.

Executing a CQL wildcard search in CMDBuild’s REST API

For our internal search engine I am currently developing a simple microservice to make our CMDBuild instance searchable. The microservice provides a fairly simple JSON API which itself queries the REST API of CMDBuild. Because of the insufficient documentation of CMDBuild I had to dig into the source to find out how to write a wildcard search query. CMDBuild has its own query language called CQL (CMDBuild Query Language). CQL statements are converted into SQL which can be executed natively by PostgreSQL. CQL also allows embedding native SQL statements into CQL queries; native SQL statements are masked with (/( … )/). Between us, the combination of CQL and SQL produces absolutely messy code, but that is another story.

One problem is that the REST search API of CMDBuild is exposed through HTTP GET. Accessing the HTTP endpoint with a filter like

GET https://cmdbuild/services/rest/v2/cql?filter={CQL: "from Department where Description (/(LIKE 'Develop%')/)"}

unfortunately confuses the Apache CXF interceptor, which chokes on the percent sign. Encoding the percent sign does not help, and a POST request is not allowed.

To fix this problem I took a look into the source of CMDBuild. Luckily for me, the CQL parser is generated with the help of ANTLR. The grammar file is much better than any incomplete example from the official forum. There I discovered that CQL natively provides the following operators: CONTAINS, BEGIN, END, BETWEEN, NULL.
In the end it worked as I had expected:

GET https://cmdbuild/services/rest/v2/cql?filter={CQL: "from Department where Description CONTAINS 'Develop'"}

Executing Liquibase database migrations from command line and as a shared Maven JAR

I am currently working on the migration of our time tracking system from Microsoft SQL Server/.NET to Java. Most of the logic resides in stored procedures and stored functions inside the database schema. For several reasons (testability, maintainability, a migration from MSSQL to PostgreSQL in the far future) the whole logic must be converted to Java. As the system is highly critical, all of the end user applications must keep running in parallel. There are 4 tools in total, written in different languages: an old PHP web application, a bridge from JIRA to our time tracking system, another JIRA-to-time-tracker converter and the original C#/.NET fat client. All systems will be migrated bit by bit to the new Spring Boot web application.

Using Liquibase for database versioning

After collecting information about the current application environment I noticed that there was no database versioning system in use. The installation of the MSSQL schema was a pain: there were a lot of plain SQL files which had to be executed by hand. Since Java was the target programming language I decided to use Liquibase: I moved all the SQL scripts into a new Git repository, added the pom.xml and wrote a self-explanatory Readme.md describing how the .NET developers had to use Liquibase.

Running Liquibase migration with the command line Maven plug-in

I will only describe the pure Maven approach here, not the standalone Liquibase installation. The execution of the Liquibase migrations is trivial and no dependencies had to be installed by hand. The pom.xml contained the following definitions:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<groupId>my.package</groupId>
	<artifactId>database-schema</artifactId>
	<version>1.0.0</version>
	<packaging>jar</packaging>
	<properties>
		<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
	</properties>
	<dependencies>
		<dependency>
			<!--  from internal Artifactory; Microsoft does not make the sqljdbc4.jar available in official repositories -->
			<groupId>com.microsoft.sqlserver</groupId>
			<artifactId>sqljdbc4</artifactId>
			<version>4.0</version>
		</dependency>
	</dependencies>
	<build>
		<plugins>
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-resources-plugin</artifactId>
				<version>2.7</version>
				<configuration>
					<encoding>UTF-8</encoding>
				</configuration>
			</plugin>
			<plugin>
				<groupId>org.liquibase</groupId>
				<artifactId>liquibase-maven-plugin</artifactId>
				<version>3.0.5</version>
				<configuration>
					<propertyFile>src/main/resources/liquibase/liquibase.properties</propertyFile>
				</configuration>
				<executions>
					<execution>
						<goals>
							<goal>update</goal>
						</goals>
					</execution>
				</executions>
			</plugin>
		</plugins>
	</build>
</project>

The liquibase.properties file contained only default values and is not important here. With the configuration above I was able to migrate the MS SQL schema with

mvn -Dliquibase.url="jdbc:sqlserver://$HOST:1433;databaseName=$DATABASE" -Dliquibase.username=$USERNAME -Dliquibase.password=$PASSWORD liquibase:update

At this point I could update the database schema by hand. This was necessary when someone had to develop inside a .NET environment or we had to migrate a staging or production database.

Making the schema available for developer environments

I quickly realized that this approach did not work well for my Java development environment. A lot of database migrations had to be developed, and the integration test environments should automatically stay in sync with the defined migrations. My idea was to let Jenkins push the Liquibase-defined schema as a Maven JAR into our internal Artifactory. I would then be able to include the JAR as a normal Maven dependency and let Spring’s Liquibase integration execute the latest migrations.

Reference a db-changelog.xml inside a JAR in your application.properties

I took a look into LiquibaseProperties and saw that the changeLog attribute supports the resource syntax. All I had to do was define the db-changelog.xml by adding the following line to the application.properties:

liquibase.change-log=classpath:/liquibase/db-changelog.my-app.xml

Please note that I changed the filename from db-changelog.xml to db-changelog.my-app.xml. This should prevent ordering issues if there is already another XML file present with the same file name. The classpath prefix is used by Spring to scan all JARs in the classpath for the requested path.

Do not use the full path name of included SQL files

As I mentioned above, all SQL statements resided in their corresponding SQL files. I used the following definition in the db-changelog.my-app.xml to include the SQL files:

	<changeSet id="5" author="ckl">
		<comment>Bla</comment>
		<sqlFile dbms="mssql" encoding="utf8"
			path="install/20151126-005_sp_calculate_worklogs.sql"
			relativeToChangelogFile="true"
			splitStatements="true" />
	</changeSet>

This worked as long as Liquibase was executed either only through the Maven command line or only as a Maven JAR dependency, but not when mixing both.

How Liquibase calculates database migration differentials

Liquibase iterates through all changesets defined in your db-changelog.xml. The XML attributes id, author and path are used first to check whether this migration already exists in the database table DATABASECHANGELOG. If a row with the given parameters exists, the checksum of the SQL file is calculated by normalizing the SQL file content (replacing new lines and so on). After that an MD5 checksum is generated from the header and the content of the file.
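
A quick way to inspect this bookkeeping is a query like the following (the changelog attribute path is stored in the table column FILENAME):

SELECT id, author, filename, md5sum FROM DATABASECHANGELOG;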

The content of the “path” attribute differs

When executing mvn … liquibase:update inside the Git repository, the column path is filled with src/main/resources/liquibase/install/20151126-005_sp_calculate_worklogs.sql. Executing the migrations during the Spring startup process instead results in the value classpath:/liquibase/install/20151126-005_sp_calculate_worklogs.sql for the path column.
This means that every migration will be executed again, resulting in DDL errors.

Ignoring the path attribute

The easiest fix was to use the attribute logicalFilePath in the databaseChangeLog tag. This forces all rows to have the same value in the path column:

<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ext="http://www.liquibase.org/xml/ns/dbchangelog-ext"
	xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.0.xsd
    http://www.liquibase.org/xml/ns/dbchangelog-ext http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-ext.xsd"
	logicalFilePath="path-ignored">
....
</databaseChangeLog>

In the DATABASECHANGELOG table the column path is filled with path-ignored.

Do not mix up different Liquibase versions

After I had fixed the previous error, Liquibase complained that the calculated checksums of the files differed. At first I suspected encoding issues and forced everything to UTF-8, but the error persisted. It took a while until I noticed that the Maven dependency liquibase-core in the Spring Boot app and the Liquibase Maven plugin for command line execution had different versions (3.3.2 versus 3.0.5). Both versions calculate the MD5 checksum in different ways, so the checksum inside the DATABASECHANGELOG table differed from the newly calculated one. All I had to do was change the Liquibase Maven plug-in to the same version:

	<build>
		<plugins>
			<plugin>
				<groupId>org.liquibase</groupId>
				<artifactId>liquibase-maven-plugin</artifactId>
				<!-- same version -->
				<version>3.3.2</version>
				<configuration>
					<propertyFile>src/main/resources/liquibase/liquibase.properties</propertyFile>
				</configuration>
				<executions>
					<execution>
						<goals>
							<goal>update</goal>
						</goals>
					</execution>
				</executions>
			</plugin>
		</plugins>
	</build>

TL;DR

I moved the definition of a Microsoft SQL Server database schema into its own repository, made the schema migratable with the help of Liquibase, and made it executable both in standalone/command line mode and as a Maven JAR dependency.

ExceptionHandler of @ControllerAdvice is not executed

It happened again: after writing about some issues caused by a different JVM class-loader order, a similar problem occurred on Friday. One of my colleagues (Dev-A) asked me to look into a problem his team had. For unknown reasons the Spring Boot based application did not return a serialized JSON error object after a @Valid annotated controller method parameter had been validated.

@Controller
public class MyController {
	// Validator for MyDto (MyDtoValidator) got called
	@RequestMapping("/validate")
	public @ResponseBody MyData myMethod(@Valid MyDto myDto) {
		return new MyData();
	}
}

An @ControllerAdvice annotated class transformed any validation error into a new exception. This had been done to unify the validation errors of Spring Data REST and Spring MVC validation.

@ControllerAdvice
public class ValidationErrorHandlerAdvice {

	private MessageSourceAccessor messageSourceAccessor;

	@Autowired
	public ValidationErrorHandlerAdvice(MessageSourceAccessor messageSourceAccessor) {
		Assert.notNull(messageSourceAccessor, "messageSourceAccessor must not be null");

		this.messageSourceAccessor = messageSourceAccessor;
	}

	@ExceptionHandler({ MethodArgumentNotValidException.class })
	@ResponseStatus(HttpStatus.BAD_REQUEST)
	@ResponseBody
	public RepositoryConstraintViolationExceptionMessage handleValidationErrors(Locale locale,
			MethodArgumentNotValidException exception) {
		// this method should be called if the validation of MyController.myMethod had failed
		return produceException(exception.getBindingResult());
	}

	@ExceptionHandler({ BindException.class })
	@ResponseStatus(HttpStatus.BAD_REQUEST)
	@ResponseBody
	public RepositoryConstraintViolationExceptionMessage handleValidationErrors(Locale locale,
			BindException exception) {
		return produceException(exception.getBindingResult());
	}

	private RepositoryConstraintViolationExceptionMessage produceException(BindingResult bindingResult) {
		return new RepositoryConstraintViolationExceptionMessage(
				new RepositoryConstraintViolationException(bindingResult), messageSourceAccessor);
	}
}

All in all, the controller advice itself looked fine to me, especially as the code is easy to understand and had been used in other projects without any problems.

Empty HTTP response body

Nevertheless the behavior was mysterious:

  • When calling /validated in the browser, the custom validator for MyDto, and therefore the controller method, definitely got hit. Nevertheless, none of the exception handlers in the ValidationErrorHandlerAdvice were called. To make it more mysterious, the HTTP response Spring generated consisted only of the HTTP status code 400 (Bad Request); the HTTP response body was completely empty.
  • Another developer (Dev-B) uses Linux as operating system. On his machine the code above worked without any problems and returned the expected HTTP status code 400 with the serialized JSON validation error object.

Dev-A has a Windows based machine. When he called the “/validated” endpoint on Dev-B’s host, the response body contained the serialized validation error. In return, when Dev-B (Linux) called “/validated” on Dev-A’s machine (Windows), the response body was empty.
I checked the HTTP request headers of both browsers, but they were more or less the same and had no influence on the HTTP pre-filters Spring had registered. Both environments used the Oracle JDK, albeit with different update releases (u43 vs. u63). Patching both JDKs to the same level was something I wanted to try last, as it seemed an unlikely cause.

Debugging session

I started to debug through the Spring Framework and realized that the order in which the registered exception handlers were checked for their responsibility for the occurred exception was completely different. On Dev-B’s machine the ValidationErrorHandlerAdvice was the first in the list; on Dev-A’s machine the first responsible exception handler was located in ResponseEntityExceptionHandler.
After stepping further through ResponseEntityExceptionHandler it made absolute sense that the response body was empty on Dev-A’s machine. But it did not make any sense that the ResponseEntityExceptionHandler was consulted in the first place.

After searching for more @ControllerAdvice annotated classes in the project I found this piece of code:

@ControllerAdvice
public class CustomErrorController extends ResponseEntityExceptionHandler {
	@ExceptionHandler()
	public ModelAndView notFound(HttpServletRequest req, Exception exception) {
		LOG.info(exception.getMessage());
		ModelAndView mav = new ModelAndView();
		// ... not so important ...
		return mav;
	}
}

Okay, so at least the exception handlers of ResponseEntityExceptionHandler had been introduced by the project itself and not by any Spring magic.

Fixing the problem

While debugging the initialization phase of Spring I saw that the order of the detected controller advices differed between both systems: CustomErrorController was registered before ValidationErrorHandlerAdvice on Dev-A’s machine, and vice versa on Dev-B’s. As the wrong behavior only occurred on Windows machines, I assume the underlying component scan is responsible for the different order.

In the end the fix was easy: I annotated both controller advices with @Order and gave the ValidationErrorHandlerAdvice a higher precedence than CustomErrorController.
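
The fix boils down to something like this (precedence values chosen for illustration; any ordering works as long as the validation advice outranks the generic one):

@ControllerAdvice
@Order(Ordered.HIGHEST_PRECEDENCE)
public class ValidationErrorHandlerAdvice { ... }

@ControllerAdvice
@Order(Ordered.LOWEST_PRECEDENCE)
public class CustomErrorController extends ResponseEntityExceptionHandler { ... }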

How to fix NoSuchMethodError or NoSuchMethodException

Yesterday my team hit a situation where a deployment failed with a NoSuchMethodError; specifically, the method com/google/common/collect/ImmutableList.copyOf could not be found while querying the Confluence REST API.

NoSuchMethodError and NoSuchMethodException occur for obvious reasons: a method is called at runtime, but the providing class does not contain that method.

NoSuchMethodException is thrown when the JVM tries to call a method through the Java Reflection API. NoSuchMethodError is thrown when compiled Java code calls the method directly, without using the Reflection API.
Because of its nature, the reason for a NoSuchMethodException can be a syntactical issue (e.g. a misspelled method name in getDeclaredMethod). If you receive the exception during development, check the spelling of the method name you try to call through reflection.
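
A minimal example of the reflection case:

import java.lang.reflect.Method;

public class ReflectionTypo {
	public static void main(String[] args) throws NoSuchMethodException {
		// typo: "lenght" instead of "length" triggers a NoSuchMethodException at runtime
		Method method = String.class.getDeclaredMethod("lenght");
	}
}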

There are essentially two reasons why this error occurs at runtime:

  • The method signature (method name and expected parameters) does not exist anywhere in your classpath. There could be an issue in your deployment / packaging phase. For a simple web project packaged through Maven this is very unlikely. But if you use overlays with classes outside of your POM definition, your problem could be located there.
  • The method signature exists multiple times in your classpath. This means you have different versions of the class in your classpath. The classes can have the same method names but differ in their parameter lists.
    Which of the classes in which JAR file takes precedence depends highly upon the environment. The JVM specification does not prescribe that a classloader has to fetch JARs in alphabetical or last-modified order, nor that it resolves classes first-come/first-served. For example, up to Tomcat 7 JAR files are loaded in alphabetical order; Tomcat 8 lets the filesystem decide which JAR comes first (Order of loading jar files from lib directory).

To identify the source of the problem, navigate to the main classpath directory of your application (e.g. WEB-INF/lib) and execute

for jar in *.jar; do for class in $(jar -tf $jar | grep $CLAZZ.class | sed 's/.class//g'); do javap -classpath $jar -s $class | grep -A 2 $METHOD && echo $jar.$class; done; done

Replace $CLAZZ with the name of the class and $METHOD with the name of the method. The shell script above searches for every occurrence of the method inside any of the JARs and prints out the different signatures.
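
For the ImmutableList.copyOf case mentioned at the beginning, a concrete invocation would look like this:

cd WEB-INF/lib
CLAZZ=ImmutableList
METHOD=copyOf
for jar in *.jar; do for class in $(jar -tf $jar | grep $CLAZZ.class | sed 's/.class//g'); do javap -classpath $jar -s $class | grep -A 2 $METHOD && echo $jar.$class; done; done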

  • If there is no result, you hit the first case: your deployment script did not include the required dependency.
  • If there are multiple results from different JAR files, you have to compare the stacktrace of your application logs with the output of the script. Check the dependency hierarchy of your Maven POM and exclude the version not containing the expected method signature.

In our case, a referenced JAR mistakenly included both google-collections-1.0 and guava-11.0.2, which both provide ImmutableList. google-collections is the older dependency and does not contain the copyOf method. In the development environment, the (Spring Boot) application had always been executed through the embedded application server; in production, the WAR was deployed inside a Tomcat 8 container. In the end we removed google-collections from the referenced JAR and the issue was fixed.

One last word from the Tomcat Bugzilla by Mark Thomas:

Applications that depend on JARs being searched for classes in a particular order are broken and should be fixed.

Doing integration tests with Arquillian and real mocked EJBs

Our current project uses JSF and CDI for the presentation layer. The business logic is encapsulated inside EJBs with a no-interface view, as proposed by Adam Bien and others. I evaluated different alternatives for integration testing and ended up with Arquillian. For JSF/CDI based applications, Arquillian is the best fit.

As I dug a little bit deeper into Arquillian, one big problem occurred: the usage of no-interface EJBs did not allow me to inject CDI alternatives through the @Alternative annotation. @Alternative expects an interface, which I did not have. In addition, I had to use the @EJB annotation in the JSF backing bean because our target application server was WebSphere. Since the EJB container used for integration testing, for example JBoss, expects all fields annotated with @EJB to be resolved and deployed, I would have had to deploy the EJB with all its dependencies. In the end the whole application would have been deployed, including database access, without any way to manipulate the results of the EJB methods.
Our data access layer uses JPA/Hibernate but cannot make use of “plain” JPQL, because we have to access legacy stored procedures of an already existing Oracle database – so in-memory testing with H2 or Derby was not possible. Another problem would have been the total duration of the integration tests. Our application has a certain complexity, and with proceeding project progress the integration tests could no longer have been executed in an acceptable time span.

The only option would have been to switch from no-interface EJBs back to traditional @Local EJBs/interfaces. In the integration tests I would define a stub which implements the interface and deploy the stub with Arquillian. Nevertheless, dynamically controlling the behavior of such a stub is not directly possible, and I would have had to write a lot of stubs.

The whole situation did not make me happy. Should doing integration tests with Arquillian force me to change the architecture and introduce more complexity? This was an option I was unwilling to choose, so I searched for alternatives. Surprisingly, Google did not provide any solution. I thought about the problem again and had an idea: I could modify the Java bytecode of the EJB class before it is deployed. The modified EJB would only act as a facade and delegate every method call to an inner mock which has the same class methods as the facade.
After doing some research, Javassist seemed to be the best tool for the bytecode manipulation. During the implementation of the bytecode modifier I struggled with some odd behavior of the application container, but in the end I succeeded.

EjbMocker allows you to deploy a bytecode-modified version of your EJB to be injected by Arquillian into your application server. You can completely control the behavior of the EJB with the help of Mockito: every method call on the EJB is forwarded to an internal mocked instance with the same class signature.

An example project can be found at https://github.com/schakko/arquillian-warp-mocked-ejb. The EjbMocker contains usage instructions, so I won’t repeat them here.

Unit tests inside Eclipse succeed, unit tests in Maven fail. WTF?

Our new project uses Maven as its build management tool. Eclipse (STS edition) is used for development. A part of the project consists of a transformation process which converts XML files to Java POJOs. Because of the given XML structure we used JAXB in combination with EclipseLink MOXy for this.

After a few weeks of initial development, mainly architectural decisions, I prepared our TeamCity instance. The first TeamCity build failed because some of the unit tests threw unexpected errors. I must admit that until then I had only executed the unit tests through Eclipse, and every test case had passed without any problem. My local command line Maven builds were triggered with -DskipTests=true and succeeded, too.

The failed build in TeamCity was caused by the following JAXB error:

com.sun.xml.internal.bind.v2.runtime.IllegalAnnotationsException: 1 counts of IllegalAnnotationExceptions
org.springframework.integration.Message is an interface, and JAXB can't handle interfaces.
	this problem is related to the following location:
		at org.springframework.integration.Message
		at public org.springframework.integration.Message ...

I repeated the test suite on my local machine (mvn test) and the first time it ran, it succeeded. Eclipse passed the unit tests, too. At first I suspected different Java/JDK versions on my local machine and the build server, but the versions were the same. So I started with a fresh mvn clean test on my machine, and now the build failed as well. WTF? Running the Maven-compiled unit tests in Eclipse now also resulted in the error above; re-compiling the code with Eclipse fixed the errors. Eclipse uses the Eclipse Compiler for Java (ECJ) during compilation and not javac of the JDK. Could it be a compiler bug? The byte code of both .class files (Maven compiled vs. Eclipse compiled) was more or less the same, so this was not the answer.

While debugging the Maven compiled artifacts I noticed that the MOXy implementation was never hit; instead, the default JAXB implementation was used. Could it be that the jaxb.properties file was not copied to the classpath? jaxb.properties is read by JAXB for initializing/overwriting the default XML context factory. And indeed, the jaxb.properties was missing: ECJ copied the .properties file to the target directory, but Maven ignored the file.
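
For reference, the jaxb.properties file has to live in the same package as the bound classes (e.g. src/main/resources/my/package/jaxb.properties for POJOs in my.package; path assumed for illustration) and contains a single line that selects MOXy as the JAXB implementation:

javax.xml.bind.context.factory=org.eclipse.persistence.jaxb.JAXBContextFactory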

What did I learn from that?

  1. Fail early – Set up the build infrastructure on the first day of your project and don’t wait until a first prototype is available.
  2. Fail everywhere – Running the tests only in one environment (Eclipse) does not mean that it succeeds in other environments (pure Maven).
  3. Don’t skip tests – Waiting for a local build to succeed sucks. Skipping the unit tests makes it faster, but a failed build in another environment (see 2.) sucks even more.

.NET from a Java developer’s perspective

I spent the last two days doing a few evaluations on the .NET platform. One of our projects uses an (admittedly pretty cool) WPF frontend to access a SOAP service via WCF, which in turn connects to an MS SQL database.

Logging to the console

Under Java, or rather inside an application server, it is no problem to use a simple System.out.println() or log.debug(); the output then shows up in the console of the IDE. If you think something like that surely works with ASP.NET as well, you are sorely mistaken. When an ASP.NET application – and I count our WCF service among those – logs to the console (via log4net’s Log.Debug or Console.Write()), the output disappears into nirvana. The reason is that IIS does not provide a console that could be logged to in the first place.

Either you use the Trace methods, or you let log4net write the logging output to a file and run tail on that file.

Life cycle of a WCF service

As a JEE developer I know that a servlet is initialized once when the application starts and is then kept in memory until the application is shut down.

With WCF, the attribute InstanceContextMode is used to describe the lifetime of an object. For a WCF service the default setting is that the object (the service) is kept alive until the current session is terminated (setting PerSession).

Servlet initialization order / initial configuration

In applications I have written without Spring, I have always defined a servlet in my web.xml that does nothing more than set the default settings and, for example, initialize the logger.

There is nothing of that kind in .NET (if I am wrong here, please correct me promptly). Instead, you have to perform this configuration through a custom ServiceHostFactory. Something like this would be possible:

namespace My.Namespace.Service
{
    public class CustomServiceHostFactory : System.ServiceModel.Activation.ServiceHostFactory
    {
        private Boolean isInitialized = false;

        public override ServiceHostBase CreateServiceHost(string constructorString, Uri[] baseAddresses)
        {
            initialize();
            return base.CreateServiceHost(constructorString, baseAddresses);
        }

        protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
        {
            initialize();
            return base.CreateServiceHost(serviceType, baseAddresses);
        }

        private void initialize()
        {
            if (isInitialized)
            {
                return;
            }

            log4net.Config.BasicConfigurator.Configure();
            // further initialization
        }
    }
}

In the respective .svc file, the service must then be instantiated through the CustomServiceHostFactory:

<%@ ServiceHost Language="C#" Debug="true" Factory="My.Namespace.Service.CustomServiceHostFactory" Service="My.Namespace.Service" CodeBehind="Service.svc.cs" %>

Service connections in the client

I fell flat on my face when I tried to define the suffix of the service endpoint just once in the client’s app.config via

<add key="httpBaseAddress" value="http://localhost" />

zu definieren. Das Attribut httpBaseAddress ist nur auf der Serverseite verfügbar und führt dazu, dass Redundanzen durch doppelte URIs entstehen.
Korrekt schaut das Beispiel nun so aus:

<configuration>
    <appSettings>
    </appSettings>
    <system.serviceModel>
      <!-- use customBinding, since NTLM does not work correctly with wsHttpBinding -->
        <bindings>
          <customBinding>
            <binding name="Binding_Service">
              <mtomMessageEncoding />
              <security authenticationMode="SecureConversation">
                <secureConversationBootstrap authenticationMode="SspiNegotiated" />
              </security>
              <httpTransport authenticationScheme="Ntlm"/>
            </binding>
          </customBinding>
        </bindings>
        <client>
          <endpoint address="http://localhost:52264/Service.svc"
              binding="customBinding" bindingConfiguration="Binding_Service"
              contract="My.Namespace.Service.IService" name="Binding_IService">
            <identity>
              <dns value="localhost" />
            </identity>
          </endpoint>
        </client>
    </system.serviceModel>
</configuration>

In the client, the proxy of the service is then loaded via

ChannelFactory<IService> serviceFactory = new ChannelFactory<IService>("Binding_IService");

The configuration above (customBinding) shows, by the way, access to the web service via NTLM. NTLM does not work with the wsHttpBinding binding.

Spring’s FrontController should not process CSS, PNG or JPG files, or: How do I serve static content

Once you have understood how Spring MVC fundamentally works – at this point I recommend the truly excellent official documentation – you reach the point where you want to include images or stylesheets in the web application. There are two ways to do this.

Specifying a dedicated URL suffix for request mappings

Concept

All requests and actions that should be processed by a controller are mapped to a specific suffix.

Controller

@Controller
class MyController {
    @RequestMapping("/home.do")
    public ModelAndView home() {
        return new ModelAndView("home");
    }

    @RequestMapping("/test.do")
    public ModelAndView test() {
        return new ModelAndView("test");
    }
}

*-servlet.xml

<!-- ... Beans ... -->
<bean id="viewResolver"
      class="org.springframework.web.servlet.view.UrlBasedViewResolver">
    <property name="viewClass" value="org.springframework.web.servlet.view.InternalResourceView" />
    <property name="prefix" value="/WEB-INF/jsp/"/>
    <property name="suffix" value=".jsp"/>
</bean>
<!-- ... more beans ... -->

web.xml

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	id="WebApp_ID" version="2.4"
	xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">

    <!-- other servlet definitions ... -->
    <servlet>
          <servlet-name>dispatcher</servlet-name>
          <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
          <load-on-startup>1</load-on-startup>
    </servlet>

    <servlet-mapping>
          <servlet-name>dispatcher</servlet-name>
          <url-pattern>*.do</url-pattern>
    </servlet-mapping>
</web-app>

Concept

Requests to URLs of the form *.do are processed by the FrontController. The URL /context/home.do is mapped to the method MyController.home(), which returns the view named home. The view is looked up by the InternalResourceViewResolver under /WEB-INF/jsp/home.jsp.

All requests that do not end in *.do (e.g. /context/images/bild.jpg) are processed by the container’s default servlet, which looks for the file /images/bild.jpg and returns it.

An advantage of this method can be that you immediately see which URLs are mapped to methods. However, the URLs no longer follow the REST principle.

Mapping the DefaultServlet to the file extensions in use

Here, the servlet container’s default servlet is told explicitly which file extensions it should handle.

Controller

@Controller
class MyController {
    @RequestMapping("/home")
    public ModelAndView home() {
        return new ModelAndView("home");
    }

    @RequestMapping("/test")
    public ModelAndView test() {
        return new ModelAndView("test");
    }
}

*-servlet.xml

See the first example

web.xml

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	id="WebApp_ID" version="2.4"
	xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">

    <!-- other servlet definitions ... -->
    <!-- example for Tomcat -->
    <servlet>
          <servlet-name>defaultServlet</servlet-name>
          <servlet-class>org.apache.catalina.servlets.DefaultServlet</servlet-class>
    </servlet>

    <!-- example for Jetty -->
    <servlet>
          <servlet-name>defaultServlet</servlet-name>
          <servlet-class>org.mortbay.jetty.servlet.DefaultServlet</servlet-class>
    </servlet>

    <servlet-mapping>
          <servlet-name>defaultServlet</servlet-name>
          <url-pattern>*.css</url-pattern>
    </servlet-mapping>

    <servlet-mapping>
          <servlet-name>defaultServlet</servlet-name>
          <url-pattern>*.png</url-pattern>
    </servlet-mapping>

    <servlet>
          <servlet-name>dispatcher</servlet-name>
          <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
          <load-on-startup>1</load-on-startup>
    </servlet>

    <servlet-mapping>
          <servlet-name>dispatcher</servlet-name>
          <url-pattern>/</url-pattern>
    </servlet-mapping>
</web-app>

Concept

The servlet container is told explicitly that static content (in our example PNG and CSS files) is processed and returned by the DefaultServlet. Every other request is processed by the FrontController. The advantage of this method is that it allows nice URLs: the URL /home, for example, is mapped to the method MyController.home().

One of the disadvantages is that every kind of static content used in the project has to be registered in the web.xml – for example if GIFs or JPEGs are to be used as well. In addition, the web.xml has to be adapted manually for every servlet container. The Java EE 5 specification does not require a default servlet to exist. Jetty and Tomcat automatically register the servlet name default for their default servlet, but that may change in the future.

Conclusion

How you serve static content is a matter of taste. Personally I find the second method more elegant, since it lets you write REST-conforming URLs.

AspectJ and Spring: reading the parameters of a method

Today I faced the problem that I wanted to check the parameters of methods with an AspectJ advice. Depending on the parameter type, an exception should be thrown. My exceptions inherit from a base class and, depending on the parameter type, carry additional exception codes. The UserException, for example, can have further codes like EMAIL_ALREADY_EXIST besides the code NOT_FOUND (parameter is null).

My methods looked roughly like this:

public void methodeA(User _a, boolean _b);

public void methodeB(AnderesDomaenenObjekt _b, User _a);

public void methodeB(User _a, User _b);

Unfortunately there is currently no way to define a pointcut that does not care about the order of the parameters.

The pointcut

	@Before(value = "execution (* de.mypackage.service..*.*(.., de.mypackage.domain.User, ..)) && args(*, _user, *)")

simply does not work this way.
To work around the problem for now, you have to define a generic pointcut that inspects the method’s parameters and the passed objects.
The following code checks all parameters passed to a method and also takes the annotations attached to each parameter into account.

@Aspect
@Component
public class ParameterValidAspect
{
	@SuppressWarnings("unchecked")
	@Before(value = "execution (* de.mypackage.service..*.*(..))")
	@Order(AspectOrder.FIRST)
	public void parameterNullCheck(JoinPoint pjp) throws SecurityException,
			NoSuchMethodException
	{
		MethodSignature methodSignature = (MethodSignature) pjp.getSignature();
		String methodName = pjp.getStaticPart().getSignature().getName();
		Class<?>[] paramTypes = methodSignature.getParameterTypes();
		Annotation[][] annotationsByMethod = ((CodeSignature) pjp.getStaticPart()
				.getSignature()).getDeclaringType()
				.getMethod(methodName, paramTypes).getParameterAnnotations();
		Object[] methodArguments = pjp.getArgs();
		Object instance = null;

		for (int i = 0, m = paramTypes.length; i < m; i++)
		{
			instance = methodArguments[i];

			if (instance == null)
			{
				try
				{
					instance = paramTypes[i].newInstance();
				}
				catch (Exception e)
				{
					// handle instantiation exceptions ...
				}
			}

			if (annotationsByMethod[i].length > 0)
			{
				for (Annotation annotation : annotationsByMethod[i])
				{
					// handle the annotation
				}
			}

			if (methodArguments[i] == null)
			{
				// ParameterNullException then handles the respective object type
				throw new ParameterNullException(instance);
			}
		}
	}
}

One more general note on using AspectJ together with the Spring Framework: if aspects are described with the @Aspect annotation, the aspect must also be annotated with @Component so that it is picked up by auto-discovery. If you do not use @Component, the aspect has to be instantiated in the Spring XML configuration!
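
Without @Component, a minimal XML configuration would look something like this sketch (the aspect’s package is assumed; the aop namespace must be declared in the beans element):

<aop:aspectj-autoproxy />
<bean class="de.mypackage.ParameterValidAspect" />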