Sunday, March 23, 2014

Review of Java EE and HTML5 Enterprise Application Development

I recently had the pleasure of reading through the book Java EE and HTML5 Enterprise Application Development by John Brock, Arun Gupta and Geertjan Wielenga.

First, a comment on file formats.  I use Linux at home, and I had a lot of trouble getting set up to actually read the book.  The format used is only compatible with Adobe Digital Editions, which only runs on Mac and Windows, so I ended up setting up a VM just to do the reading.  It's a bit of a pain if you're like me and use a tablet for a decent amount of your reading.

The book takes the approach of building HTML5 applications with the Java server side acting as an API server, exposing both REST APIs and WebSockets, and a front end based on Knockout and low-level jQuery to make the API calls.  The backend uses your stereotypical Java EE stack of JPA, EJB, and JAX-RS, plus WebSocket support.  The book serves as an entry point for someone new to these technologies and how to use them; it doesn't focus on how the specs have changed over time or on some of their newer features.  It's a bottom-up approach to exposing your database over an API.

Probably the most confusing part of the book, and it may be due to the different authors, is the switching between examples.  The JPA and JAX-RS chapters use a book-and-author example, while the WebSocket chapter uses a tic-tac-toe board example.  I believe a single running example is the best thing to do in a technical book.  A good example of that is Continuous Enterprise Development by Andrew Lee Rubinger and Aslak Knutsen.

The chapter on application security is, in my opinion, probably the best in the book.  It goes through what you need to do on both the server side and the client side to secure your web applications.  For those new to this programming paradigm, it's good information on some of the key differences versus traditional server-side-rendered web applications (JSF, Struts, etc.).

The book's content is presented in an introductory manner.  If you're new to these technologies, it's a good read to get up to speed on how they work.  The JPA spec only changed a little in 2.0 and 2.1, so if you're already familiar with how things worked previously, there isn't a huge difference.  JAX-RS is a newer spec, already in its 2.0 release, and shows how declarative it can be.  Hidden in the REST chapter you'll find some interesting pieces on CDI, transactions, and Bean Validation; these supporting technologies really help tie everything together.

Wednesday, February 26, 2014

Announcing Hammock

I'd like to introduce the world to Hammock.

Hammock grew out of my last blog post: a lightweight way to run JAX-RS resources with a minimal configuration.

Binaries are currently up on Sonatype OSS (I hope they sync to Maven Central shortly): https://oss.sonatype.org/index.html#nexus-search;quick~ws.ament.hammock

Github: https://github.com/johnament/hammock

What is Hammock?

Hammock is a lightweight integration between JBoss RestEasy, Undertow, and Weld.  Leveraging Weld SE, it provides automatic resource scanning and the minimal binding code needed to launch a web container (Undertow) hosting a full JAX-RS application.

Getting Started

Getting started with Hammock is simple.  Add a reference to the project in your pom.xml:


<dependency>
  <groupId>ws.ament.hammock</groupId>
  <artifactId>hammock-core</artifactId>
  <version>0.0.1</version>
</dependency>


Add your REST Resource class:

@Path("/echo")
@RequestScoped
public class EchoResource {
    @GET
    @Produces("text/plain")
    public String greet() {
        return "hello";
    }
}

Implement the application configuration (via @ApplicationConfig):

@ApplicationConfig
@ApplicationScoped
public class ApplicationConfigBean implements WebServerConfiguration {
    @Override
    public int getPort() {
        return 8080;
    }
    @Override
    public String getContextRoot() {
        return "/api";
    }
    @Override
    public Collection getProviderClasses() {
        return Collections.EMPTY_LIST;
    }
    @Override
    public Collection getResourceClasses() {
        return Collections.singleton(EchoResource.class);
    }
    @Override
    public String getBindAddress() {
        return "0.0.0.0";
    }
}

You can optionally do the same for a management interface (via @ManagementConfig).  The resources tied to each of these configurations are then launched when you start your application.

Starting your app can be done manually via Weld SE, or by using its built-in main class, org.jboss.weld.environment.se.StartMain.
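
For example, a minimal launcher that just delegates to Weld SE could look like this (a sketch; the class name is mine, and nothing Hammock-specific is needed in it):

import org.jboss.weld.environment.se.StartMain;

// Purely illustrative launcher; booting Weld SE is what lets Hammock discover
// the @ApplicationConfig bean and start Undertow during container startup.
public class Launcher {
    public static void main(String[] args) {
        StartMain.main(args);
    }
}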

Sunday, January 19, 2014

Bridging Netty, RestEasy and Weld

As you likely know, RestEasy already supports an embedded container for Netty.  RestEasy also supports CDI injection, but only for enterprise use cases (e.g. as part of a Java EE container or when using Weld Servlet).  In the case of Netty, it's almost possible, except that the lack of a ServletContext throws it off.

In addition, in many use cases you may want to translate each incoming request into a CDI RequestScope.  This requires some custom handling of each request, before passing it down to RestEasy for processing.  This allows you to properly scope all of your objects, though you cannot use a session scoped object (since there would be no active session).

The code to do this is pretty simple.  You can find the details in my GitHub repository: https://github.com/johnament/resteasy-netty-cdi

First, define your endpoint.  In my test case, I added a very simple one:

@Path("/")
@RequestScoped
public class TestEndpoint {
    @GET
    public String echo() {
        return "pong";
    }
}

Next, we need some code to initialize the server.  I added this directly in my test, but I would imagine most people would want to initialize it elsewhere.

CDINettyJaxrsServer netty = new CDINettyJaxrsServer();
ResteasyDeployment rd = new ResteasyDeployment();
rd.setActualResourceClasses(paths.getResources());
rd.setInjectorFactoryClass(CdiInjectorFactory.class.getName());
netty.setDeployment(rd);
netty.setPort(8087);
netty.setRootResourcePath("");
netty.setSecurityDomain(null);
netty.start();

As you can see in the test, I am using a custom CDINettyJaxrsServer, which is what enables the CDI integration.  The only difference between mine and the normal one is which RequestDispatcher it uses.  The RequestDispatcher is what RestEasy provides to handle incoming requests and produce responses; it's very low level.  I decided this was the exact point where I wanted to start the CDI request scope, so my RequestDispatcher looks like this:

public class CDIRequestDispatcher extends RequestDispatcher
{
    public CDIRequestDispatcher(SynchronousDispatcher dispatcher, ResteasyProviderFactory providerFactory,
                                SecurityDomain domain) {
        super(dispatcher,providerFactory,domain);
    }
    public void service(HttpRequest request, HttpResponse response, boolean handleNotFound) throws IOException
    {
        BoundRequestContext requestContext = CDI.current().select(BoundRequestContext.class).get();
        Map<String,Object> requestMap = new HashMap<String,Object>();
        requestContext.associate(requestMap);
        requestContext.activate();
        try {
            super.service(request,response,handleNotFound);
        }
        finally {
            requestContext.invalidate();
            requestContext.deactivate();
            requestContext.dissociate(requestMap);
        }
    }
}

So whenever a request comes in, I start the context (using Weld's BoundRequestContext) and end it on completion.  I also created a custom CdiInjectorFactory for Netty.  This works around a problem in the base one, which depends on a ServletContext being available (and throws a NullPointerException without one).  It's just a simplified version of the injector factory:

    protected BeanManager lookupBeanManager()
    {
        BeanManager beanManager = lookupBeanManagerCDIUtil();
        if(beanManager != null)
        {
            log.debug("Found BeanManager via CDI Util");
            return beanManager;
        }
        throw new RuntimeException("Unable to lookup BeanManager.");
    }

You'll also notice in my test code I'm using a CDI Extension - LoadPathsExtension.  This simply sits on the classpath and listens as Weld initializes.

LoadPathsExtension paths = CDI.current().select(LoadPathsExtension.class).get();

For each ProcessAnnotatedType it observes, it checks whether @Path is present; if it is, it adds the class to a local list of resources.

public void checkForPath(@Observes ProcessAnnotatedType<?> pat) {
    if(pat.getAnnotatedType().isAnnotationPresent(Path.class)) {
        logger.info("Discovered resource "+pat.getAnnotatedType().getJavaClass());
        resources.add(pat.getAnnotatedType().getJavaClass());
    }
}

This makes scanning for @Path resources possible, something a container would normally do for RestEasy.  With the Netty deployment, you need to maintain the list of resources yourself.

LoadPathsExtension paths = CDI.current().select(LoadPathsExtension.class).get();
CDINettyJaxrsServer netty = new CDINettyJaxrsServer();
ResteasyDeployment rd = new ResteasyDeployment();
rd.setActualResourceClasses(paths.getResources());

Finally, we run the actual test, which uses the JAX-RS client API to make a request to a specific resource:

Client c = ClientBuilder.newClient();
String result = c.target("http://localhost:8087").path("/")
        .request("text/plain").accept("text/plain").get(String.class);
Assert.assertEquals("pong", result);

Saturday, October 26, 2013

Announcing injectahvent - a lightweight integration between CDI Events & Apache Camel exchanges

Howdy all!


I am proud to announce a new open source library I am working on: injectahvent.  Injectahvent is meant to be a lightweight integration between Camel messages and CDI events.  It lets you define a processor that fires CDI events for Camel exchanges and, in turn, lets you register new ObserverMethods in CDI that listen for events and move them over to Apache Camel exchanges.

So what does it do so far?


Well, it's a little generic right now, but essentially you can register a new CDIEventProcessor into your RouteBuilder and let it fire a CDI event with the body and exchange objects.
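
To make that concrete, registering the processor in a route might look roughly like this (a sketch; the endpoint URIs are placeholders, and the way CDIEventProcessor is constructed is an assumption on my part, so check the project for the actual wiring):

import org.apache.camel.builder.RouteBuilder;

// Rough sketch only: assumes CDIEventProcessor can be instantiated directly.
public class EventFiringRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:incoming")
            .process(new CDIEventProcessor()) // fires a CDI event carrying the body and exchange
            .to("log:events");
    }
}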

The second part is that it allows an application developer to create a simple CDI extension that registers new ObserverMethods for CDI events and uses a Camel ProducerTemplate to send the corresponding message.
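
Conceptually, the observer-to-Camel direction boils down to something like this hand-written illustration (injectahvent registers observers like this through a CDI extension instead; the event type, endpoint URI, and injectable ProducerTemplate are assumptions):

import java.io.Serializable;

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;
import javax.inject.Inject;

import org.apache.camel.ProducerTemplate;

@ApplicationScoped
public class OrderEventBridge {
    @Inject
    private ProducerTemplate producerTemplate;

    public void onOrderPlaced(@Observes OrderPlaced event) {
        producerTemplate.sendBody("direct:orders", event); // hand the event off as the message body
    }
}

// Hypothetical event payload, just for this example.
class OrderPlaced implements Serializable {}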

Conceptually, this is a follow-on project to the JBoss Seam JMS module.  While JMS 2.0's new fluent API for sending messages was inspired in part by the builder pattern used in Seam JMS, the firing of events was not carried over.  Firing events is more of an EIP/EAI type of tool, so I decided to leverage Apache Camel to make it more generic.

The library leverages Apache DeltaSpike and its CDI container control API to start a new request context for each fired message.  It's currently assumed that the basic bootstrapping of the CDI container is done elsewhere.
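
Roughly, each handled message gets wrapped in something along these lines (a sketch; the class and method names are mine, and the CDI container is assumed to be running already):

import javax.enterprise.context.RequestScoped;

import org.apache.deltaspike.cdise.api.ContextControl;
import org.apache.deltaspike.core.api.provider.BeanProvider;

public class RequestContextWrapper {
    public void inRequestContext(Runnable work) {
        ContextControl contextControl = BeanProvider.getContextualReference(ContextControl.class);
        contextControl.startContext(RequestScoped.class);
        try {
            work.run(); // e.g. fire the CDI event for the incoming message
        } finally {
            contextControl.stopContext(RequestScoped.class);
        }
    }
}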

Source Code: https://github.com/johnament/injectahvent
Issues: https://github.com/johnament/injectahvent/issues

I hope to start putting together a wiki for it soon, as well as doing some code cleanup.

Sunday, September 29, 2013

Testing JAAS based authentication using Arquillian + Apache DeltaSpike Servlet Module

Hey all.  Long time no post.

In today's discussion, I'm talking about testing enterprise applications again.  I was recently faced with a dilemma: I needed to test my application both with the user logged in and logged out, to ensure that everything behaved correctly with an authenticated principal.

Of course, this assumes a few things.  First, that we've executed something to ensure our database is populated with a user that could log in.  In my case, I did this by calling internal APIs that do registration and registration completion.  You could do other things, such as using the Arquillian Persistence Extension to insert records.  My application is using the @InSequence annotation with Arquillian and JUnit to give methods an ordering.

Now that we have a little bit of setup, we're going to leverage a few newer technologies.  Servlet 3 added login() and logout() methods to the HttpServletRequest interface.  These give us an easy way to authenticate an end user against the configured JAAS realm.  The problem, though, is how we access that request object from an Arquillian test.

Of course, this approach assumes you're using the Servlet 3 protocol (or better) to execute your tests, which ensures that you have an active HTTP request for every test.  This request is what we'll use to log the user in to the application and log out afterwards.  You can follow the instructions in the DeltaSpike documentation to set up the servlet module in your application.  You'll likely also want to use the ShrinkWrap Maven resolver to bring the servlet module into your deployment WAR.  That resolver call is as easy as:

File[] impl = Maven.resolver().resolve("org.apache.deltaspike.modules:deltaspike-servlet-module-impl").withTransitivity().asFile();

which can then be added to your WAR as library JARs.
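
For completeness, a minimal sketch of a deployment method that does that (the archive name is a placeholder, and you'd add your own application and test classes as well):

@Deployment
public static WebArchive createDeployment() {
    // Resolve the DeltaSpike servlet module and bundle it, plus its dependencies, into the test WAR.
    File[] libs = Maven.resolver()
            .resolve("org.apache.deltaspike.modules:deltaspike-servlet-module-impl")
            .withTransitivity().asFile();
    return ShrinkWrap.create(WebArchive.class, "auth-test.war")
            .addAsLibraries(libs)
            .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml"); // enable CDI
}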

Now, in your test case you just need to inject the request, then log in and log out in your @Before/@After methods:

@Inject
@Web
private HttpServletRequest request;

...

@Before
public void loginBeforeTest() throws ServletException { // throws simply in case there is a configuration issue, bad account info, already logged in etc.
    request.login(username,password);
}


...

@After
public void logoutWhenDone() throws ServletException {
    request.logout();
}

Now, when your test executes, any injection point for Principal will be filled in with the account that was logged in.  This will correctly propagate to any security context as well: EJBContext, MessageDrivenContext, etc.  This of course assumes that your username and password are instance variables within the class, or static references that can be found (I have found that the static reference approach works best for handling client and server tests in Arquillian).
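
As a quick sanity check (a sketch; the sequence number and names are whatever fits your test), you can inject the built-in Principal bean into the test and assert it matches the logged-in account:

@Inject
private Principal principal; // CDI's built-in Principal bean

@Test
@InSequence(3) // some point after the registration and login setup
public void verifyAuthenticatedPrincipal() {
    Assert.assertEquals(username, principal.getName());
}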

Tuesday, July 16, 2013

Just an idea - programmatic registration of MDBs

So, I'm throwing an idea out there, soliciting feedback.  Why can't MDBs be programmatically registered in the container?

My idea is that an MDB could be a CDI object (rather than an EJB object).

I have a simple GitHub project around this idea here, if you want to check it out (still need to maven-ize it): https://github.com/johnament/jms-mdb-config

The first thing to point out is that MDBs are the main way to do JMS message processing in a Java EE application server.  It's not possible to set up your own message listener due to the container restrictions around unmanaged threads.

The idea is to provide MDB support programmatically.  This is something very useful from both a framework standpoint and an application developer standpoint.  It allows cleaner support for lambda expressions, as well as a way to dynamically bind MDBs based on runtime settings; you could change your bindings on the fly (assuming, of course, that support for this can be added).

Late in Java EE 7, a change was introduced to allow an MDB to be bound at the method level of an object.  While this is a great feature, it still doesn't tackle what I think most people need from an MDB: dynamic support.  I'm using a PaaS example in my setup, where we need to dynamically create a queue per tenant to do work (to physically separate their data) and then register that queue with a new MDB instance when it's ready.
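
To make the PaaS scenario concrete, here's a purely hypothetical sketch of what registering a per-tenant listener could look like; MessageListenerRegistry and the JMSConfiguration builder don't exist anywhere today, the names are only illustrative:

@ApplicationScoped
public class TenantQueueRegistrar {
    @Inject
    private MessageListenerRegistry registry; // hypothetical container-provided registry

    public void registerTenant(String tenantId) {
        registry.register(JMSConfiguration.builder()
                .destinationLookup("jms/queues/" + tenantId)                  // one queue per tenant
                .listener(message -> processTenantMessage(tenantId, message)) // lambda-friendly listener
                .build());
    }

    private void processTenantMessage(String tenantId, javax.jms.Message message) {
        // tenant-specific processing goes here
    }
}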

There are a few things this approach doesn't seem to support.  The first is pooling: there's no way to create many instances in the one case, and the other case uses CDI's rules for instantiation.  I think both are OK, since we're not talking about super heavyweight objects, though we may run into an issue if too many messages arrive on the same thread.  Realistically, I haven't dealt with an implementation that wasn't singleton in nature, so I don't think this is a huge loss.  The second is that this doesn't give an option to do the configuration from annotations.  I think that could be solved by introducing @JMSConfigurationProvider and @JMSListenerMethod annotations, so that the container scans for these automatically on startup.

Of course, these are all JMS-specific annotations and interfaces.  I would imagine a revised MDB spec like this would indicate that a configuration object can be polled and returns a typed configuration based on what it's handling; JMSConfiguration would be that for JMS (and if we supported other types of MDBs, each would have its own configuration).

Another thing this helps with is the SE world: we would now have a standard interface to program against for binding listeners.  While it would effectively be an MDB in SE, it could apply to a remote JMS connection, which could remove the need to bind MessageListeners manually.

Saturday, June 15, 2013

What's new in JMS 2 - Part 2 - An Event based Message Sender

One of the pipe dreams we had in Seam 3 was to create a flexible JMS event router that could send and receive messages as if they were simple CDI events. Why did we want this? For one, it was rather difficult to build a JMS sender: lots of objects and exception handling, with not much you could actually do about those errors. Supporting this gave an easy way to handle asynchronous event processing with CDI, leveraging JMS. It should be simple, but there were some stumbling blocks we ran across.

  1. CDI events used a global firing process, not a first match or best match algorithm. If you had an observer method that was as generic as:

    public void handleObj(@Observes Object obj) { ... }

    Then every event fired would make it to this method.

  2. Creating specific bindings breaks dynamic event processing.

    In CDI, there are two ways to fire an event: the first is to use the Event interface, the other is to use the BeanManager.

    We got close. In a fairly convoluted way, we could support static binding of events to JMS messages. However, we had no way to support the additional qualifiers that might accompany a message. Let's say we had the following qualifiers on our event:

    @SendTo("someQueue") @From(customer=1) @ReplyTo("someOtherQueue")

    We could use the SendTo to map the message to a Destination; however, the extra annotations couldn't come along for the ride. CDI didn't give access to these qualifiers, so any observer method was blind to the qualifiers its event was fired with.

  3. Inbound message processing breaks Java EE specification requirements. Technically, you cannot start a message listener in Java EE; you instead need to set up an MDB. You could create a message listener implementation and let developers extend it for custom processing, or map it in ejb-jar.xml, but that doesn't allow the solution to be turnkey.

To help fix the problems described in 1 and 2, CDI 1.1 introduced the EventMetadata object, which helps out our observer methods. This class acts like an InjectionPoint but is specific to event observers.

Issue 3 remains an issue. Thankfully, one of the late additions in Java EE 7 was support for a more dynamic resource adapter, so we'll have to see if something can be done with that.

Now, with CDI 1.1 and JMS 2, we can easily build this outbound message sender in about 25 lines of code. Behold:

public class SendToObserver {
    @Inject
    private JMSContext jmsContext;

    @Resource
    private Context context;

    public void sendViaJms(@Observes @SendTo("") Serializable obj, EventMetadata metadata) {
        SendTo st = getSendTo(metadata);
        try {
            Destination d = (Destination) context.lookup(st.value());
            jmsContext.createProducer().setProperty("qualifiers", metadata.getQualifiers()).send(d, obj);
        } catch (NamingException e) {
            // TODO log something here please
        }
    }

    public SendTo getSendTo(EventMetadata metadata) {
        Set<Annotation> qualifiers = metadata.getQualifiers();
        for (Annotation a : qualifiers) {
            if (a instanceof SendTo) {
                return (SendTo) a;
            }
        }
        return null;
    }
}


Assuming that we have a SendTo qualifier defined as:

@Qualifier
@Target({ TYPE, METHOD, PARAMETER, FIELD })
@Retention(RUNTIME)
@Documented
public @interface SendTo {
    @Nonbinding String value();
}


So what happens in this observer? Simple. First, we're observing any Serializable object, so an injection point for this observer could be:

@Inject
@SendTo("someQueue")
@Important
private Event<Serializable> serializableEvent;
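
Firing the message is then just a normal CDI event fire; any Serializable payload works (a plain String here, just for illustration):

serializableEvent.fire("hello over JMS"); // the qualifiers on the injection point ride along as EventMetadata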


Now, in CDI 1.1 the qualifiers (@SendTo("someQueue") and @Important) are forwarded and show up in the annotation set returned by getQualifiers(). We can iterate through them to find the SendTo and identify where to send the message. We can also take all of these qualifiers and attach them as a property of the message, making them available on the receiver side. The getSendTo method does that scanning to read in the destination.

Next, we use JMS 2's JMSContext to simplify sending. Behind the scenes, this uses a JMSProducer with a builder pattern to set properties on the message and then send it to the destination. We don't need to create a message ourselves; that happens within the producer for us. The injected JMSContext can be request scoped or transaction scoped; whichever is appropriate will be active.

What was easily 1,000+ lines in Seam JMS is now about 25 lines (30 if you count the qualifier). That's a huge improvement, and now sending JMS messages really is as easy as firing a CDI event.