Saturday, April 12, 2014

Spring Boot on OpenShift

I have a Spring Boot application that I would like to deploy to OpenShift. Unfortunately, the Spring Boot documentation is silent about OpenShift, although it does cover other cloud providers (of course Pivotal's Cloud Foundry, but also Heroku).

This is a how-to describing how I deployed my Spring Boot application to OpenShift as a prebuilt WAR file. It is pieced together from various parts of the OpenShift and Spring Boot documentation.

Spring Boot Configuration


General:

In the application you need a class like the following. Note the extends clause and the overridden configure method, which you don't normally need in Spring Boot, where the main method alone would suffice:

@Configuration
@EnableAutoConfiguration
@EnableMongoRepositories
@ComponentScan
public class Booter extends SpringBootServletInitializer {

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        return application.sources(Booter.class);
    }


    public static void main(String[] args) throws Exception {
        SpringApplication.run(Booter.class, args);
    }

}

You also need to add this to your pom.xml:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-tomcat</artifactId>
    <scope>provided</scope>
</dependency>

For local development I use the console to fire up the application with the Maven command:
mvn spring-boot:run

Read more about the pom.xml changes and the SpringBootServletInitializer here and here.

Actuator:

I use the Spring Boot Actuator. However, on OpenShift you must move the /health endpoint (read on to learn why). The easiest way to do this is simply to remap all actuator endpoints with a single application property (make an application.properties file and put it in the src/main/resources folder of the application):

management.context-path=/manage

That will map for instance /health to /manage/health instead.

Mongo:

To use MongoDB you need the host, port, username and password given by OpenShift. I use the following bean definition to switch between OpenShift and my test system.

@Bean
public MongoTemplate mongoTemplate() throws Exception {

    if (System.getenv("OPENSHIFT_MONGODB_DB_HOST") != null) {

        LOG.info("Connecting to OpenShift Mongo");

        String openshiftMongoDbHost = System.getenv("OPENSHIFT_MONGODB_DB_HOST");
        int openshiftMongoDbPort = Integer.parseInt(System.getenv("OPENSHIFT_MONGODB_DB_PORT"));
        String username = System.getenv("OPENSHIFT_MONGODB_DB_USERNAME");
        String password = System.getenv("OPENSHIFT_MONGODB_DB_PASSWORD");
        Mongo mongo = new MongoClient(openshiftMongoDbHost, openshiftMongoDbPort);
        UserCredentials userCredentials = new UserCredentials(username, password);
        String databaseName = System.getenv("OPENSHIFT_APP_NAME");
        MongoDbFactory mongoDbFactory = new SimpleMongoDbFactory(mongo, databaseName, userCredentials);
        MongoTemplate mongoTemplate = new MongoTemplate(mongoDbFactory);
        return mongoTemplate;
        
    } else {

        LOG.info("Connecting to test Mongo");

        return new MongoTemplate(new SimpleMongoDbFactory(new MongoClient(), "test"));
    }
}
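The environment-variable check above can be factored into a small helper. This is just a sketch; the Env class and its method name are my own invention, not part of Spring or OpenShift:

```java
public class Env {

    // Read an environment variable, falling back to a default when it is
    // not set (e.g. when running outside OpenShift).
    public static String getOrDefault(String name, String fallback) {
        String value = System.getenv(name);
        return value != null ? value : fallback;
    }

    public static void main(String[] args) {
        // On OpenShift this prints the gear's Mongo host; locally it
        // prints "localhost".
        System.err.println(Env.getOrDefault("OPENSHIFT_MONGODB_DB_HOST", "localhost"));
    }
}
```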


OpenShift


Application type:

I use the Tomcat 7 (JBoss EWS 2.0) cartridge with scaling plus the MongoDB cartridge. I started by checking out the code using git clone, removed the src folder and the pom.xml file (as I am deploying WAR style).

The actual deployment is pretty simple. After building with mvn package I copy the generated WAR file to the webapps folder and rename it to ROOT.war, as I want it mapped to my-app.rhcloud.com (and not my-app.rhcloud.com/some-other-thing). Then I git add, commit and push. That is also described here.

HAProxy (scaling):

If you use scaling in your application you need to know about HAProxy, which, unless you change its configuration, monitors your application by calling the / path and expects a 200 OK HTTP response back. My app returns 404 when accessing /, so HAProxy considers the application down, and 503 is returned for everything except calls to /health (more on that later).

To fix it, ssh into your app and go to the haproxy folder. Then edit the file conf/haproxy.conf; specifically, change the line shown below (almost at the bottom of the file) to whatever path you want HAProxy to monitor:

option httpchk GET /

Choose the path carefully: HAProxy polls it rather often to check the application state, so avoid a path that requires a lot of server power to process. Afterwards run this in the console:

bin/control restart

to make the change take effect. HAProxy is described here.
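For instance, if you let HAProxy monitor the remapped actuator health endpoint from earlier, the edited line could look like this (assuming the /manage prefix; any cheap path that your app answers with 200 will do):

```
option httpchk GET /manage/health
```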

/health:

/health has a special meaning on OpenShift; luckily, you have already mapped the Spring Boot actuator's /health to /manage/health.

The OpenShift /health mapping is described here.


That's all folks.

Friday, April 11, 2014

NetBeans NetCAT 8.0


I think some people found me harsh in my entry on NetBeans vs. IntelliJ IDEA. As I wrote there, I actually did a lot of work getting to know the IDEs, especially NetBeans, where I joined the NetCAT.

Yesterday I got a mail from NetBeans notifying me that some of the bugs I filed were indeed solved in the 8.0 final release. I especially like that 241120 was fixed, as it was quite annoying when refactoring. Good job.

Dear NetBeans User,
In the past you have taken the time to report issues that you encountered while using NetBeans software. A new version (NetBeans 8.0) has just been released, and we'd like to inform you that the following issue(s) you reported have been addressed in the new release:
  • 239915 Unpacking index is extremely slow
  • 240039 Double "Unit tests" folders in New file dialog
  • 240189 [regression] broken "Go To Source" in Find Usages tab
  • 240639 NullPointerException at org.netbeans.modules.web.jsf.editor.actions.NamespaceProcessor.computeImportData
  • 240696 NullPointerException at org.netbeans.modules.web.debug.EngineContextProviderImpl.getDefaultContext
  • 240699 XML gets formatted as HTML in Variables view
  • 241120 Too many return values.
  • 241723 java.io.FileNotFoundException: http://docs.angularjs.org/api/guide/ie
Please visit the netbeans.org website to download NetBeans 8.0 and to learn more about the new release.
We appreciate your contribution to our efforts to make NetBeans software and features better for all users. And as always, we look forward to your feedback on how we can continue to improve NetBeans. Thank you.
The NetBeans Team

Looking for a Java Profiler. No IntelliJ IDEA profiler? What to do?

I have previously written about NetBeans 8 vs. IntelliJ IDEA 13. One thing I am missing in IntelliJ is a profiler. Having a profiler integrated into the IDE, rather than running on the side, has several advantages: to name a few, the profiler hooks into the IDE's run configurations, and you can jump directly from the profile to the (slow) code in question.

However, IDEA does not ship with a profiler. I know that it integrates very nicely with JProfiler, but that is a rather expensive tool. This led me to look for other options. I have previously used VisualVM, but Mission Control caught my eye as it has a cool feature called Flight Recorder (a low-overhead profiler that you can even use in production). It requires a recent JVM that bundles Mission Control (a recent Java 7 or 8 will do) and that you run your app with the extra options I'll show below.

Run options


From within IDEA: 

To profile a unit test or app from within IDEA, here is what I do. When you start your app or unit test from within IDEA, add this to the run configuration's VM options:

-XX:+UnlockCommercialFeatures -XX:+FlightRecorder -XX:StartFlightRecording=duration=120m,filename=recording.jfr

This starts Flight Recorder immediately and runs it for up to 120 minutes, writing the results to a file called recording.jfr in the project root folder. When the run is done (or after 120 minutes) you can open the file in Mission Control and look at, for instance, the Hot Methods tab under the Code tab.

Of course you can also just connect Mission Control directly to the running process that is executing your code: with Mission Control open, select the Flight Recorder item under the process in the JVM Browser. But it is nice to know that you can make the recording start alongside the application itself using the above configuration.

Options intermezzo:

When you want to do more experimentation you can add an extra settings argument to -XX:StartFlightRecording (see the Maven example below). The argument must match a file name in the JRE_HOME/lib/jfr folder. These files define, for instance, the sampling rate and what gets recorded (such as which JVM and OS metrics), and you can edit them in Mission Control's Flight Recording Template Manager.

From the console using Maven:

If you prefer the console, here are some nifty Maven commands:

To run tests that match Profile* and profile them (note that if you use Surefire and already have an argLine in the pom.xml file, then you can't override it from the console):

mvn -Dtest=Profile* test -DargLine="-XX:+UnlockCommercialFeatures -XX:+FlightRecorder -XX:StartFlightRecording=filename=result.jfr,duration=120m,settings=MyWicketSettings.jfc"

Run app with profiling enabled (here a Spring boot app):

MAVEN_OPTS="-XX:+UnlockCommercialFeatures -XX:+FlightRecorder" mvn spring-boot:run

With that command you have enabled Flight Recorder. You can then open Mission Control when you want to look at the app's performance etc.

Sunday, February 23, 2014

IntelliJ IDEA vs NetBeans IDE

Since November 2013 I have tried and tested IntelliJ IDEA Ultimate (version 13) and NetBeans (version 8.0 beta and some builds before and after the beta; please keep this in mind when reading the following, as things may have improved in the final release).

I chose IDEA because many of my friends tell me it is the best. NetBeans because that was the first IDE I used (way back around when Eclipse 2.0 was king) and the one I used almost exclusively until late 2007, when I started using Eclipse, or rather SAP NetWeaver Developer Studio. Hence there is some nostalgia in choosing NetBeans, but the fact that it is a free polyglot IDE also weighs in.

I have used the two IDEs on a Maven-based project containing around 4000 Java files, 500 JSPs and a lot of JavaScript. Frameworks used are Spring, Hibernate and various others. To be clear from the outset: I have been more involved in NetBeans, as I volunteered to be part of the NetCAT for NetBeans 8.0; I thought that was a good way to learn the IDE. To be fair to IntelliJ IDEA, it should be stated that what I focus on in this blog entry is not tied to the Ultimate edition.

There are obviously many parts of both IDEs that I have not used. However what I have used is the basic stuff that any decent Java developer will use.

Here are my findings:

 

General findings:

IntelliJ has ctrl-w and NetBeans has alt-shift-period. Both commands select the syntactical element at the caret, and pressing the shortcut repeatedly expands the selection (you can also shrink it again with other commands). When editing Java files the two are equally good, but when editing JSP/HTML and JavaScript files IntelliJ is far superior. I have filed an enhancement request with NetBeans.

NetBeans' code refactoring tools are not the best. For instance, introducing a new variable does not give you an easy way to change the type of the new variable (in IntelliJ this works); I have filed an enhancement request for this in NetBeans. I also ran into another problem where NetBeans was unable to extract a method, but in fairness to the NetBeans team it should be stated that they promptly fixed it.

I have also experienced some strange problems in NetBeans. For instance, stopping a running Spring Boot project does not shut down the Tomcat process it fired up, which means the Tomcat port is still taken when you rerun the project (this is Mac specific). Doing the equivalent in IntelliJ, or Eclipse for that matter, takes down the Tomcat process (also on a Mac). I have also seen exceptions that the NetBeans team say are caused by the JDK (1.8, which they recommend) but that they won't or can't fix or work around. Maybe Mac and NetBeans are not a match made in heaven, because I have also reported another bug that is related to Mac and that can't be solved.

Slowness issues:

Running unit tests in NetBeans is very slow compared to IntelliJ (my workflow is the usual TDD loop: write some tests, make a code change, run the tests, and so on). It seems that NetBeans noticeably rebuilds everything via Maven to run a single test, whereas IntelliJ must have a better model, or does something in the background as you code, as it can run the tests immediately (turning compile-on-save on or off in NetBeans does not change this). I have filed an enhancement request with NetBeans.

Alt-F7 in NetBeans and IntelliJ means the same thing: finding usages - something I do a lot. However, finding usages in the two IDEs is very different. In NetBeans finding usages takes some time (you actually have to wait while the IDE searches), whereas the usages pop up immediately in IntelliJ. This must be due to superior indexing in IntelliJ. Simply finding files or types in the two IDEs also shows that IntelliJ has better indexing, giving you instantaneous results.

Debugging in NetBeans and IntelliJ also leaves you with two very different user experiences. NetBeans is annoyingly slow at showing variable values; I often have to wait long enough to notice that I am waiting.

When you import a Maven project into NetBeans, it tries to update its Maven index (at least on the first run, but I experienced it often, e.g. simply when adding new dependencies in the pom files). This is a very painful experience, as processing the downloaded index files takes a lot of time (10+ minutes during which the IDE is practically unusable, but at least I get to hear my fans fire up).

NetBeans IDE has a slowness detector which will report an exception if an operation takes more than 5 seconds. I think 5 seconds is a long time to wait; the threshold should probably be closer to 2 seconds for IDE operations (of course, if you ask it to do a Maven build, you can't blame the IDE for Maven being slow).

Final words: 

To be honest, I started out the investigation hoping that NetBeans would be better. But IntelliJ is simply a better IDE; don't take my word for it, though, try it yourself. That said, it was fun to be part of the NetBeans community, finding and filing bugs (40+ in the last two months) and seeing them get fixed, so I will probably give NetBeans another try later, perhaps when I have to pay for a personal license for IntelliJ :)

Edits:

I have fixed some cut and paste errors and typos after publishing this entry.

Tuesday, June 25, 2013

Spring singletons - a potential source of error

When you create a Spring bean and do nothing special it will be a singleton as described in the documentation. Such a declaration might look like this:

package org.saabye_pedersen.kim;

@Component
public class ExampleBean {
    
    public String getMessage() {
        return "Hello world!";
    }

}
With an XML config like this:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd">

    <context:component-scan base-package="org.saabye_pedersen.kim"/>

</beans>
If we run this code:
public class Main {

    public static void main(String[] args) {
        ApplicationContext ctx = new ClassPathXmlApplicationContext("file:src/main/resources/META-INF/spring/app-context.xml");
        ExampleBean bean = ctx.getBean(ExampleBean.class);
        String message = bean.getMessage();
        System.err.println(message);


        ExampleBean beanAgain = ctx.getBean(ExampleBean.class);
        message = beanAgain.getMessage();
        System.err.println(message);
    }

}
the output will be:
Hello world!
Hello world!
So far so good. Everything is fine and dandy. But what happens if we change the class to this:
@Component
public class ExampleBean {

    private int aMember = 1;

    public String getMessage() {
        String tmp = "Hello world! " + aMember++;
        return tmp;
    }
}
What will the main method of the Main class print this time? The answer is:
Hello world! 1
Hello world! 2
Did you expect this instead?:
Hello world! 1
Hello world! 1
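To see why, strip away Spring entirely: the container simply hands out one shared instance, so both calls mutate the same field. A minimal sketch (same class as above, minus the Spring annotation):

```java
public class SingletonStateDemo {

    static class ExampleBean {
        private int aMember = 1;

        public String getMessage() {
            return "Hello world! " + aMember++;
        }
    }

    public static void main(String[] args) {
        // Spring's singleton scope means every getBean call returns the
        // same instance, so state carries over between calls.
        ExampleBean singleton = new ExampleBean();
        System.err.println(singleton.getMessage()); // Hello world! 1
        System.err.println(singleton.getMessage()); // Hello world! 2
    }
}
```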
The result is hopefully not surprising, yet I have seen a lot of code in which singletons manipulate instance members in a way that will fail. For instance, this will only give the right answer on the first invocation:
@Component
public class AnotherExampleBean {

    private int sum = 0;

    public int calcSumOfProvidedInts(int[] intsToSum) {

        for (int i : intsToSum) {
            sum += i;
        }

        return sum;
    }
}
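A stateless version avoids the problem. Here is a sketch (in the Spring version the class would keep its @Component annotation, dropped here to keep the example self-contained):

```java
public class AnotherExampleBean {

    public int calcSumOfProvidedInts(int[] intsToSum) {
        // A local variable instead of an instance field: every invocation
        // starts from zero, so the shared singleton is safe to call repeatedly.
        int sum = 0;
        for (int i : intsToSum) {
            sum += i;
        }
        return sum;
    }
}
```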

Sunday, May 5, 2013

Spring annotation on interface or class implementing the interface??

A recurring question with regard to Spring is whether to annotate interfaces or concrete classes with, for instance, the @Transactional annotation (and similar annotations, e.g. @Cacheable).

The Spring documentation states the following:

Spring recommends that you only annotate concrete classes (and methods of concrete classes) with the @Transactional annotation, as opposed to annotating interfaces. You certainly can place the @Transactional annotation on an interface (or an interface method), but this works only as you would expect it to if you are using interface-based proxies. The fact that Java annotations are not inherited from interfaces means that if you are using class-based proxies (proxy-target-class="true") or the weaving-based aspect (mode="aspectj"), then the transaction settings are not recognized by the proxying and weaving infrastructure, and the object will not be wrapped in a transactional proxy, which would be decidedly bad.

But what does that really mean? Let me illustrate with a simple example:
public class Main {

    public static void main(String... args) {

        ApplicationContext ctx = new ClassPathXmlApplicationContext("file:src/main/resources/META-INF/spring/app-context.xml");
        WithAnnotation bean = ctx.getBean(WithAnnotation.class);
        bean.someMethod();
    }
}


// I'll play around with this
// @Transactional
interface WithAnnotation {
    void someMethod();
}

// I'll play around with this
// @Transactional
@Component
class ConcreteClass implements WithAnnotation {

    public void someMethod() {
        System.err.println("In some method: " + TransactionInterceptor.currentTransactionStatus());
    }
}

Default configuration: proxy-target-class="false"

With a transaction configuration like this (the default):

<tx:annotation-driven proxy-target-class="false"/>
If we run the program with the @Transactional uncommented on the interface we get:
...
In some method: org.springframework.transaction.support.DefaultTransactionStatus@5c76911d
...

If we run the program with the @Transactional uncommented on the class (commenting out @Transactional on the interface again) we get:
...
In some method: org.springframework.transaction.support.DefaultTransactionStatus@5c76911d
...

In both cases the code executes in a transactional context as we would expect.

Non default configuration: proxy-target-class="true"

Now let's change the transaction configuration to this (the non default):

<tx:annotation-driven proxy-target-class="true"/>
If we run the program with the @Transactional uncommented on the interface (commenting out @Transactional on the class again) we get:
...
Exception in thread "main" org.springframework.transaction.NoTransactionException: No transaction aspect-managed TransactionStatus in scope
 at org.springframework.transaction.interceptor.TransactionAspectSupport.currentTransactionStatus(TransactionAspectSupport.java:110)
...

If we run the program with the @Transactional uncommented on the class (commenting out @Transactional on the interface again) we get:
...
In some method: org.springframework.transaction.support.DefaultTransactionStatus@4869a267
...

What have we learned?

If we force Spring to create a cglib proxy by setting proxy-target-class to "true" (or it can't create a JDK proxy), then @Transactional on the interface is not seen and the code will not execute in a transactional context (just like the Spring documentation stated).
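The underlying reason is plain Java, not Spring: annotations declared on an interface are not visible on the implementing class, and the implementing class is all a class-based (cglib) proxy gets to see. A small sketch using a made-up marker annotation:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class AnnotationInheritanceDemo {

    @Retention(RetentionPolicy.RUNTIME)
    @interface Marker {}

    @Marker
    interface Annotated {
        void someMethod();
    }

    static class Impl implements Annotated {
        public void someMethod() {}
    }

    public static void main(String[] args) {
        // The interface carries the annotation...
        System.err.println(Annotated.class.isAnnotationPresent(Marker.class)); // true
        // ...but the implementing class does not inherit it, which is why a
        // class-based proxy never sees @Transactional placed on the interface.
        System.err.println(Impl.class.isAnnotationPresent(Marker.class)); // false
    }
}
```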

A warning: if you remove the call to TransactionInterceptor.currentTransactionStatus() the code will run without throwing an exception, but it will not run in a transactional context. Yes, you read that right: no exception, no warning and no transactional context, even though you have the @Transactional annotation on the interface. I guess that is what is meant by ...the object will not be wrapped in a transactional proxy, which would be decidedly bad :)

Saturday, April 20, 2013

Java 8 lambda walkthrough

For work I have made a presentation about Java 8's Project Lambda, along with some simple code illustrating some of the points. The overall reasons for the Java 8 changes are:
  • More concise code (for classes that have just one method & collections). “We want the reader of the code to have to wade through as little syntax as possible before arriving at the "meat" of the lambda expression.” – Brian Goetz (http://cr.openjdk.java.net/~briangoetz/lambda/lambda-state-4.html)
  • Ability to pass around functionality, not just data
  • Better support for multi core processing
All examples are runnable on the following version of Java 8 downloaded from here:
openjdk version "1.8.0-ea"
OpenJDK Runtime Environment (build 1.8.0-ea-lambda-nightly-h3876-20130403-b84-b00)
OpenJDK 64-Bit Server VM (build 25.0-b21, mixed mode)


The simplest case:

public class ThreadA {

    public static void main(String[] args) {

        new Thread(new Runnable() {

            @Override
            public void run() {
                System.err.println("Hello from anonymous class");
            }
        }).start();

    }

}

public class ThreadB {

    public static void main(String[] args) {
        new Thread(() -> {
            System.err.println("Hello from lambda");
        }).start();

    }

}
Note the syntax, informally as
()|x|(x,..,z) -> expr|stmt
The arrow is a new operator. And note the conciseness of the second piece of code compared to the more bulky first piece.
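The three parameter shapes from the informal syntax above can be sketched with standard java.util.function types:

```java
import java.util.function.BinaryOperator;
import java.util.function.Function;

public class LambdaSyntax {

    public static void main(String[] args) {
        // (): no parameters
        Runnable r = () -> System.err.println("no args");
        // x: a single parameter (parentheses are optional)
        Function<Integer, Integer> square = x -> x * x;
        // (x, y): several parameters
        BinaryOperator<Integer> add = (x, y) -> x + y;

        r.run();
        System.err.println(square.apply(4)); // 16
        System.err.println(add.apply(2, 3)); // 5
    }
}
```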

Collections:


First let me introduce a simple domain and some helpers:
public class Something {


    private double amount;

    public Something(double amount) {
        this.amount = amount;
    }

    public double getAmount() {
        return amount;
    }

    public String toString() {
        return "Amount: " + amount;
    }
}

public class Helper {

    public static List<Something> someThings() {
        List<Something> things = new ArrayList<>();
        things.add(new Something(99.9));
        things.add(new Something(199.9));
        things.add(new Something(299.9));
        things.add(new Something(399.9));
        things.add(new Something(1199.9));
        return things;
    }

}

public interface Doer<T> {

    void doSomething(T t);

}


Let's do some filtering and summing, Java 7 style:

public class CollectionA {

    public static void main(String... args) {

        List<Something> things = Helper.someThings();

        System.err.println("Filter");
        List<Something> filtered = filter(things);
        System.err.println(filtered);

        System.err.println("Sum");
        double sum = sum(filtered);
        System.err.println(sum);

    }

    public static List<Something> filter(List<Something> things) {
        List<Something> filtered = new ArrayList<>();
        for (Something s : things) {
            if (s.getAmount() > 100.00) {
                if (s.getAmount() < 1000.00) {
                    filtered.add(s);
                }
            }
        }
        return filtered;
    }

    public static double sum(List<Something> things) {
        double d = 0.0;
        for (Something s : things) {
            d += s.getAmount();
        }
        return d;
    }


}


And now Java 8 style - streaming:

import java.util.stream.Collectors;

public class CollectionB {

    public static void main(String... args) {

        List<Something> things = Helper.someThings();

        System.err.println("Filter lambda");
        List<Something> filtered = things.stream().parallel().filter( t -> t.getAmount() > 100.00 && t.getAmount() < 1000.00).collect(Collectors.toList());
        System.err.println(filtered);

        System.err.println("Sum lambda");
        double sum = filtered.stream().mapToDouble(t -> t.getAmount()).sum();
        System.err.println(sum);

    }

}


The java.util.function interfaces & method references

public class CollectionC {

    public static void main(String... args) {

        List<Something> things = Helper.someThings();

        System.err.println("Do something");
        doSomething(things, new Doer<Something>() {

            @Override
            public void doSomething(Something t) {
                System.err.println(t);
            }
        });
    }

    public static void doSomething(List<Something> things, Doer<Something> doer) {
        for (Something s : things) {
            doer.doSomething(s);
        }
    }

}


Replace our Doer interface with the standard Consumer interface (previously known as Block)

import java.util.function.Consumer;

public class CollectionD {

    public static void main(String... args) {

        List<Something> things = Helper.someThings();

        System.err.println("Do something functional interfaces");
        consumeSomething(things, new Consumer<Something>() {

            @Override
            public void accept(Something t) {
                System.err.println(t);
            }
        });

        System.err.println("Do something functional interfaces, using lambda");
        consumeSomething(things, (t) -> System.err.println(t));

        System.err.println("Do something functional interfaces, using lambda method reference (new operator ::) ");
        consumeSomething(things, System.err::println);

        System.err.println("Do something functional interfaces, using stream");
        things.stream().forEach(new Consumer<Something>() {

            @Override
            public void accept(Something t) {
                System.err.println(t);
            }
        });

        System.err.println("Do something functional interfaces, using stream and method reference");
        things.stream().forEach(System.err::println);
    }

    public static void doSomething(List<Something> things, Doer<Something> doer) {
        for (Something s : things) {
            doer.doSomething(s);
        }
    }

    public static void consumeSomething(List<Something> things, Consumer<Something> consumer) {
        for (Something s : things) {
            consumer.accept(s);
        }
    }

}

Map, reduce, lazy & optional

import java.util.List;
import java.util.NoSuchElementException;
import java.util.Optional;
import java.util.stream.Collectors;

public class Various {

    public static void main(String... args) {

        List<Something> things = Helper.someThings();

        //Map
        System.err.println(things.stream().map((Something t) -> t.getAmount()).collect(Collectors.toList()));

        //Reduce
        double d = things.stream().reduce(new Something(0.0), (Something t, Something u) -> new Something(t.getAmount() + u.getAmount())).getAmount();
        System.err.println(d);

        //Reduce again
        System.err.println(things.stream().reduce((Something t, Something u) -> new Something(t.getAmount() + u.getAmount())).get());

        //Map/reduce
        System.err.println(things.stream().map((Something t) -> t.getAmount()).reduce(0.0, (x, y) -> x + y));

        //Lazy
        Optional<Something> findFirst = things.stream().filter(t -> t.getAmount() > 1000).findFirst();
        System.err.println(findFirst.get());

        //Lazy no value
        Optional<Something> findFirstNotThere = things.stream().filter(t -> t.getAmount() > 2000).findFirst();
        try {
            System.err.println(findFirstNotThere.get());
        } catch (NoSuchElementException e) {
            System.err.println("Optional was not null, but its value was");
        }
        //Optional one step deeper
        things.stream().filter(t -> t.getAmount() > 1000).findFirst().ifPresent(t -> System.err.println("Here I am"));

    }

}
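Instead of the try/catch around get() above, Optional also offers orElse (and orElseGet) to supply a fallback directly; a quick sketch:

```java
import java.util.Optional;

public class OptionalFallback {

    public static void main(String[] args) {
        Optional<String> empty = Optional.empty();
        // No value present, so the fallback is returned instead of a
        // NoSuchElementException being thrown.
        System.err.println(empty.orElse("fallback")); // fallback
    }
}
```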