Sunday, October 22, 2017

How to deal with Java EE Modules

I recently spent some time trying to figure out why Hammock wasn't working on Java 9.  It was a very weird error, but something people run into quite often.

Caused by: java.lang.ClassNotFoundException: javax.xml.bind.JAXBException

Errr ok.  I've been using JAXB for a long time.  I never realized that it was part of Java EE.  It made sense though, given the javax namespace.  Apparently the fix was to add some configuration when running Maven to include JAXB's module from the JDK (which I guess I had been using all along?).

mvn clean install -DargLine="--add-modules java.xml.bind"

The -DargLine passes additional options to the forked Surefire JVM.  --add-modules (plural!) adds a single module (plus its dependencies) to the module graph; otherwise it isn't resolved and its classes aren't visible.  All of these are runtime issues, visible when the CDI runtime attempts to start.  So after making this change, I figured things would work.

Caused by: java.lang.ClassNotFoundException: javax.xml.ws.WebServiceFeature

Yay.  Now I remembered why I liked OSGi so much.  Apparently the fix for this one is to add the module java.xml.ws to the build.  So I went ahead and ran

mvn clean install -DargLine="--add-modules java.xml.bind --add-modules java.xml.ws"

Fair enough, another module.  Not too bad.  So after running that I was expecting to need still more modules, and instead got this output

Caused by: java.lang.ClassNotFoundException: javax.annotation.Priority

Ok, that's a weird one.  Why's it weird?  The @Priority annotation comes from the javax.annotation-api JAR.  It's not provided by the system.  CDI runtimes bring in this JAR (Weld uses the RI, OWB brings in the Geronimo spec JAR).  I spent about 10 hours over three days trying to dissect this one.  I tried all sorts of things to enable the javax.annotation module, even though the JAR was on the classpath.  Before I reveal the solution, it's best to understand why the modules I explicitly enabled above are hidden.

These modules come from Java EE, yet the RI for them was in the JDK itself.  By placing the RI in the JDK, we opened up external usage of an internal feature set.  We also set a precedent that all Java implementations will include these dependencies.  That can't be guaranteed if these modules are expected to be there but aren't.  We lose the write once, run anywhere mantra Java has followed for so long.  The good news, though, is that each of these modules is available as a standalone dependency, distributed via Maven Central.

Anyways, back to how to fix this problem.  It turns out that when you enable the java.xml.ws module, it pulls in a large number of internal dependencies, including the internal JDK javax.annotation package.  One of the modular changes in the JDK is that a package can only exist in one module.  Since javax.annotation was coming from both the JDK and a JAR on the classpath, the JDK version was being used.  So how to deal with this?  Well, it turns out that for this use case you cannot rely on JDK provided modules.  By bringing in the actual Maven dependencies for these JARs, you can deal with this consistently - not depending on any JDK internals but on publicly available JARs.  This is all I had to bring in to make an application compile and run on Java 9:

<dependency>
    <groupId>javax.annotation</groupId>
    <artifactId>javax.annotation-api</artifactId>
    <version>1.3.1</version>
</dependency>
<dependency>
    <groupId>javax.xml.ws</groupId>
    <artifactId>jaxws-api</artifactId>
    <version>2.3.0</version>
</dependency>
<dependency>
    <groupId>javax.activation</groupId>
    <artifactId>activation</artifactId>
    <version>1.1.1</version>
</dependency>



The great thing is that, thanks to transitive dependencies, all the needed JARs come along automatically.
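
Incidentally, if you do go the --add-modules route, you don't need to retype the argLine on every build; the same flags can be wired into the Surefire plugin configuration in your pom instead (a sketch; adjust to the plugin version your build uses):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <!-- Same flags as -DargLine, applied to every test run -->
        <argLine>--add-modules java.xml.bind --add-modules java.xml.ws</argLine>
    </configuration>
</plugin>
```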

Sunday, October 1, 2017

Moving Java EE away from the JCP - an important first step

If you haven't already, sign up for the new EE4J mailing list

I was as surprised as others to see this in one of the first few messages on the list
It is my understanding that the new specification process will not be using the JCP.
However, I believe it's extremely important to realize how critical a step this is.  When Oracle purchased Sun, they brought along everything that was there.  Included in this was the very Sun-heavy JCP, which, while open, was a pain if you weren't a Sun employee.  Much of this has remained the case with Oracle in the lead.  The JCP is run independently, but seemingly by Oracle staff.

While we've been able to get some JSRs through that were not led by Oracle, it hasn't been many.  The intention to move EE4J away from the JCP is purely a positive sign that Oracle wants to let the community drive the initiatives.  It allows those who want a say to have one, and to drive the direction of this new technology.  If I look at who is on the JCP EC, not all of them are involved in Java EE today.  So why should the future - and effectively the vote on whether a Java EE specification is ready to be shipped - be put before a group who may not be interested in Java EE?

To me at least, this is a great sign, an important step, and really a clear indication that we need to look at things differently with EE4J than we ever did with Java EE.  I welcome the future and the innovation that this project is going to create.

Tuesday, June 21, 2016

Hibernate OGM + Apache DeltaSpike Data = Pure Love for NoSQL

If you haven't looked at it before, Hibernate OGM is a JPA implementation designed around NoSQL databases.  Since it's a JPA implementation, it plugs in perfectly to Apache DeltaSpike's Data module.  The added support for NoSQL databases makes it a strong candidate for cross-platform persistence.

To start, we'll create a persistence.xml to represent our connection.


<persistence
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    version="2.1"
    xmlns="http://xmlns.jcp.org/xml/ns/persistence"
    xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
  <persistence-unit name="MyPU" transaction-type="RESOURCE_LOCAL">
    <!-- Use the Hibernate OGM provider: configuration will be transparent -->
    <provider>org.hibernate.ogm.jpa.HibernateOgmPersistence</provider>
    <properties>
      <property name="hibernate.ogm.datastore.provider" value="mongodb"/>
      <property name="hibernate.ogm.datastore.database" value="swarmic"/>
      <property name="hibernate.ogm.datastore.create_database" value="true"/>
    </properties>
  </persistence-unit>
</persistence>


This is copied pretty much verbatim from the user guide.  I'm using MongoDB.  Not for any particular reason other than I've used it before and it was already installed on my machine.  Next we'll create a repository and an entity:

@Entity
public class Employee {

    @Id
    @GeneratedValue(generator = "uuid")
    @GenericGenerator(name = "uuid", strategy = "uuid2")
    private String id;

    @Column(length = 40)
    private String name;

    public Employee() {
    }

    public Employee(String name) {
        this.name = name;
    }

    public String getId() {
        return id;
    }
}

@Repository(forEntity = Employee.class)
public interface EmployeeRepository extends EntityPersistenceRepository<Employee, String> {
    @Query("select e from Employee e order by e.name")
    Stream<Employee> list();
}

Now we have a database layer that can save and retrieve entities.  The last thing we need is a REST endpoint to expose this.  To do that, we want to make sure it's transactional (since we're using resource-local transactions) as well as providing some sane operations (create, list).  That endpoint probably looks like this

// assumes a static import: import static java.util.stream.Collectors.toList;
@Path("/")
@RequestScoped
@Produces("application/json")
@Transactional
public class EmployeeRest {

    @Inject
    private EmployeeRepository repository;

    @GET
    public Response list(@QueryParam("name") String name) {
        return Response.ok(repository.list().collect(toList())).build();
    }

    @GET
    @Path("/{id}")
    public Employee get(@PathParam("id") String id) {
        return repository.findBy(id);
    }

    @POST
    @Consumes("text/plain")
    public Response create(String name) {
        Employee employee = new Employee(name);
        Employee result = repository.save(employee);
        return Response.ok(result.getId()).build();
    }
}

So just like that, in the ballpark of 75 lines of code, we have a CDI based application that uses Hibernate OGM and Apache DeltaSpike and can list, get and create entities end to end.  That was super simple!
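
One note on the build: the classpath needs the Hibernate OGM MongoDB module and the DeltaSpike Data module.  A sketch of the relevant dependencies (the versions shown are assumptions from around this time; use whatever is current):

```xml
<dependency>
    <groupId>org.hibernate.ogm</groupId>
    <artifactId>hibernate-ogm-mongodb</artifactId>
    <version>5.0.0.Final</version>
</dependency>
<dependency>
    <groupId>org.apache.deltaspike.modules</groupId>
    <artifactId>deltaspike-data-module-api</artifactId>
    <version>1.6.1</version>
</dependency>
<dependency>
    <groupId>org.apache.deltaspike.modules</groupId>
    <artifactId>deltaspike-data-module-impl</artifactId>
    <version>1.6.1</version>
    <scope>runtime</scope>
</dependency>
```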

Sunday, May 29, 2016

Hammock 0.0.3 is Out!

After more than 2 years, the next version of Hammock is finally out.  Some of the key changes here:

- Upgraded to the latest libraries of pretty much everything.
- Use a more modular build structure
- Introduction of a new security component
- Serving file system assets

If you're not familiar with Hammock, it's a lightweight CDI-based integration of RESTEasy and Undertow.  It's a quick and easy to use development framework for spinning up small applications.


Full details on how to get started with Hammock can be found in the README

So what's next?  Well a few integrations are still in the works.

- JPA support, probably via Hibernate, likely to also provide migration support via Flyway
- Camel support based on the Camel CDI project
- Metrics support based on the Metrics CDI project

Sunday, December 20, 2015

Picking the right Scope for your CDI Beans

I sometimes find people struggling with how to choose the right scope for their beans.  While it can be confusing, I hope that this article will help you figure out which scope best applies to your beans.

What's available?


As of Java EE 7, your normal scopes are RequestScoped, SessionScoped, ApplicationScoped, ConversationScoped and TransactionScoped.  The first two are closely tied to the HTTP life cycle: typically, when an HttpSession is started, a Session Context becomes available for use, and when the session is destroyed that Session Context ends.  Sessions are meant to be passivated and potentially serialized, so any SessionScoped objects you have must be serialization friendly, including their dependencies.  A Request Context exists in many places, but the most straightforward use case is the duration of a single HTTP request.  The various specs define a few other places where a Request Context is active - PostConstruct methods of Singleton EJBs, MDB invocations, etc.

A Conversation Context is started manually by a developer and is usually bounded within a shorter period than a containing context.  For example, you could create a conversation as a child to a request, and close it before the request is done.  The typical use case is around a single session, a set of requests that are all meant to be done together in a coordinated fashion.

ApplicationScoped beans are created once, the first time the application calls upon them, and are not created again.  Note that the scope of an application is somewhat ambiguous.  If you have just a WAR, even if it has some libraries without EJBs, it will share a context.  Any time a JAR provides its own entry point (e.g. a JAX-RS endpoint or SOAP endpoint), that may be considered a separate application.  EARs are known for introducing this kind of problem, as multiple WARs and EJB JARs will likely create their own unique contexts, resulting in multiple Application Contexts being available.  When in doubt, if you can run a single WAR without an EAR, do it.

Transaction Scope was introduced in Java EE 7 as a part of JTA 1.2.  This context is activated within the bounds of a single transaction.
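
A minimal sketch (the bean name is made up): a TransactionScoped bean comes into existence the first time it's used inside an active JTA transaction and is destroyed when that transaction completes.

```java
import java.io.Serializable;
import javax.transaction.TransactionScoped;

// One instance per JTA transaction; discarded on commit or rollback.
@TransactionScoped
public class TransactionLog implements Serializable {
    private final StringBuilder entries = new StringBuilder();

    public void record(String entry) {
        entries.append(entry).append('\n');
    }
}
```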

There are two other scopes - pseudo scopes, you could say - Dependent and Singleton.  When working with CDI, Singleton is basically the same as ApplicationScoped, except that Singleton beans aren't eligible for interceptors or decorators.  Dependent has similar restrictions, but also an interesting caveat: the injected bean shares its context with its injection point.  If you inject it into a RequestScoped bean, the injected Dependent bean shares the Request Context.
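
To make that caveat concrete, here's a sketch (class names are illustrative, and a running CDI container is assumed):

```java
// AuditTrail.java
import javax.enterprise.context.Dependent;

@Dependent
public class AuditTrail {
    // A fresh AuditTrail instance is created for every injection point.
}

// OrderForm.java
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;

@RequestScoped
public class OrderForm {
    // This AuditTrail shares the Request Context of its owner: it is
    // created alongside the OrderForm and destroyed when the request ends.
    @Inject
    private AuditTrail auditTrail;
}
```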

State & Data vs Services and Operations


As an application is doing work, it needs to maintain some amount of state.  This could represent a user's session, entities being manipulated, or a local cache of static resources.  This is different from a service that may be performing operations.

Domain-Driven Design considerations


Suppose that you are designing a shopping cart, one of the timeless classics in software development.  How would you model the ShoppingCart based on scopes, while also building a rich model that supports the operations relevant to its use?  Consider these interfaces

public interface ShoppingCart {
    ShoppingCartItem addItem(Item item);
    Order checkout(BillingInformation billingInformation);
}

At a very high level, consider these behaviors as well:
- A shopping cart is persistent.  If I change it, and then leave the site, what I added should be available to me at a later point.
- Adding an item to my shopping cart is an atomic, synchronized, idempotent and request based transaction.  I'm only able to add one item at a time in a single request but I could open multiple browser tabs and make changes in tandem.
- Since these operations are atomic and state is persistent, the changes I make should be automatically written to a data store as a part of these operations.

Based on this information, as well as the tips above, how would you scope these domain objects?

Here's one approach.

- Your ShoppingCart is session scoped.  It's associated to a user, persistent but generally tied to a single user session.
- Your ShoppingCart is dependent on some kind of ShoppingCartService that is responsible for the persistence of the state of a shopping cart, and likely other services as needed (e.g. an OrderService for handling checkout w/ credit card information).  These services are ApplicationScoped and operate on these classes.
- Item is a RequestScoped bean that represents what you're trying to add to your cart.  It is built up in a single request and pushed into your cart.
- BillingInformation is a ConversationScoped bean, the data is built up in a few requests, and then calls checkout when all of the information is present.
- Neither Order nor ShoppingCartItem is a managed bean.  They are created within the scope of their respective methods.
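
Sketched in code, that scoping might look like this (class and method names are illustrative, not a definitive design):

```java
import java.io.Serializable;
import javax.enterprise.context.SessionScoped;
import javax.inject.Inject;

// Session scoped: follows the user across requests, so it must be
// serialization friendly.
@SessionScoped
public class UserShoppingCart implements ShoppingCart, Serializable {

    // ApplicationScoped, stateless collaborator handling persistence
    // (an assumed service, per the second bullet above).
    @Inject
    private ShoppingCartService shoppingCartService;

    @Override
    public ShoppingCartItem addItem(Item item) {
        // Item arrives as a RequestScoped bean; ShoppingCartItem is a
        // plain object created within the scope of this method.
        return shoppingCartService.persist(this, item);
    }

    @Override
    public Order checkout(BillingInformation billingInformation) {
        // BillingInformation is ConversationScoped, built up over
        // several requests before checkout is called.
        return shoppingCartService.checkout(this, billingInformation);
    }
}
```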

These domain classes together represent the state of your application, specifically how things are changing over time.  So how do you interact with them?

Controllers


If a controller is aware of the view and the backing model, what scope does it get?  If you're dealing with JAX-RS, the answer is pretty easy - RequestScoped resource classes are mandated in a JAX-RS/CDI environment.  You have a bit more leeway with frameworks like JSF, but realistically a request scoped controller makes the most sense.
- It's bound to the HTTP lifecycle (assuming you're using HTTP)
- It represents the start of a request.
- It's the user's action of interacting with the system on a per request basis.

This doesn't mean you can't use a SessionScoped or TransactionScoped controller, but semantically it makes things clearer if it is request scoped.  Another thing to point out: your request scoped controller can still interact with your session scoped model.  Typically requests are made within a session, and thus give you access to the session's contents as well as the request's contents.  That means this controller is valid

@RequestScoped
public class ShoppingCartController {
    @Inject
    private ShoppingCart shoppingCart;
}

This is a perfectly valid thing to do.

Consider this alternate approach.  What do you think is going to happen?

@RequestScoped
public class Item {
}
@SessionScoped
public class ShoppingCartController{
    @Inject
    private Item item;
}

In here, as mentioned above, Item is bound to a request; it represents the item being selected by the user.  I marked my controller as session scoped since the lifecycle of the shopping cart is tied to a session.  This is a legal injection and is threadsafe.  What happens is that the Item's context is bound to the HTTP request, so the same session scoped controller will see a different Item across two different requests.  Obviously, be careful about things like mutating the controller's state (since it's a controller, that shouldn't be an issue).

Services


Services, from my point of view, have two valid scopes.  First, they can be application scoped (or singleton, if you don't care about proxies/AOP) since they maintain no state.  Second, they can be dependent, since they also maintain no state and can be reused in a wider variety of cases.  I'm generally wary of recommending dependent scoped beans, just because of some ways they can be misused (e.g. not cleaning up, lookup/non-injection cases).  As long as a service isn't maintaining any state, it can be used over and over again.
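
For example, a stateless application scoped service might look like this (a sketch; the class name and operation are made up):

```java
import java.math.BigDecimal;
import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class TaxService {
    // No instance state, so one shared instance is safe across all requests.
    public BigDecimal addTax(BigDecimal amount, BigDecimal rate) {
        return amount.add(amount.multiply(rate));
    }
}
```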

Conclusion


There you have it.  I hope you found this article useful.  Please feel free to add comments below.

Sunday, March 23, 2014

Review of Java EE and HTML5 Enterprise Application Development

I recently had the pleasure of reading through the book Java EE and HTML5 Enterprise Application Development by John Brock, Arun Gupta and Geertjan Wielenga.

First, a comment on file formats.  I use Linux at home, and I had the most trouble trying to get set up to actually read the book.  The format used is only compatible with Adobe Digital Editions, which only works on Mac and Windows.  I ended up getting a VM to do the reading on.  A little bit of a pain if you're like me and use a tablet for a decent amount of reading.

The book takes the approach of building HTML5 applications that leverage the Java server side as an API server, with both REST APIs and WebSockets in use, and a front end based on Knockout and low level jQuery to process API calls.  The backend is your stereotypical Java EE stack of JPA, EJB and JAX-RS, plus WebSocket support.  The approach used in this book serves as an entry point for someone new to much of the technology; it doesn't focus on changes over time in these specs or some of the new features.  It's a bottom up approach to exposing your database over an API.

Probably the most confusing part of the book, and it may be because of the different authors, is the crossing of example types.  The JPA and JAX-RS chapters use a book and author example, while the WebSocket chapter uses a tic-tac-toe board example.  I believe the consistent use of a single evolving example is the best thing to do in a technical book.  A good example of that is Continuous Enterprise Development by Andrew Lee Rubinger and Aslak Knutsen.

The chapter on application security is probably the best of the book, in my opinion.  It goes through what you need to do not just server side but also client side to secure your web applications.  For those new to this programming paradigm, it's some good information on some of the key differences versus traditional server side rendered web applications (JSF, Struts, etc).

The content of the book is brought about in an introductory manner.  If you're new to these technologies, it's a good read to get up to speed on how they work.  The JPA spec has only changed a little bit in 2.0 and 2.1, so if you're already familiar with how things worked previously, it's not a huge change.  JAX-RS is a newer spec, in its 2.0 release already and shows how declarative it can be.  Hidden in the REST chapter you'll find some interesting pieces on CDI, Transactions and Bean Validation.  These other technologies really help build the bridge across all of the technology.

Wednesday, February 26, 2014

Announcing Hammock

I'd like to introduce the world to Hammock

Hammock is based on my last blog post, creating a lightweight service to run JAX-RS resources over a minimalistic configuration.

Binaries are currently up on Sonatype OSS (I hope they sync to MVN central shortly): https://oss.sonatype.org/index.html#nexus-search;quick~ws.ament.hammock

Github: https://github.com/johnament/hammock

What is Hammock?

Hammock is a lightweight integration between JBoss RESTEasy, Undertow and Weld.  Leveraging Weld SE, it provides automatic resource scanning and minimal wiring code to launch a web container (Undertow) running a full JAX-RS application.

Getting Started

Getting started with Hammock is simple.  Add a reference to the project in your pom.xml:


<dependency>
    <groupId>ws.ament.hammock</groupId>
    <artifactId>hammock-core</artifactId>
    <version>0.0.1</version>
</dependency>

Add your REST Resource class:

@Path("/echo")
@RequestScoped
public class EchoResource {
    @GET
    @Produces("text/plain")
    public String greet() {
        return "hello";
    }
}

Implement the configuration for your application (via @ApplicationConfig):

@ApplicationConfig
@ApplicationScoped
public class ApplicationConfigBean implements WebServerConfiguration {

    @Override
    public int getPort() {
        return 8080;
    }

    @Override
    public String getContextRoot() {
        return "/api";
    }

    @Override
    public Collection<Class<?>> getProviderClasses() {
        return Collections.emptyList();
    }

    @Override
    public Collection<Class<?>> getResourceClasses() {
        return Collections.<Class<?>>singleton(EchoResource.class);
    }

    @Override
    public String getBindAddress() {
        return "0.0.0.0";
    }
}

You can optionally do this for a management interface as well (via @ManagementConfig).  The resources tied to each of these configurations are then launched when you start your application.

Starting your app can be done manually via Weld SE, or by using its built-in class, org.jboss.weld.environment.se.StartMain.
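
For example, delegating to that class from your own launcher might look like this (a sketch; assumes Weld SE 2.x, where StartMain is the documented entry point):

```java
import org.jboss.weld.environment.se.StartMain;

public class Launcher {
    public static void main(String[] args) {
        // Boots the Weld SE container; Hammock observes container startup
        // and launches Undertow with the configured resources.
        StartMain.main(args);
    }
}
```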