Wednesday, April 20, 2016

Building a new productive team around Java EE 7

Anton Smutný is a software engineering manager at Muehlbauer Group, an international industrial company specializing in a wide array of technologies. At the technology center located in Nitra, Slovakia, they are building a new agile Java team to fulfil the company's growing internal needs for innovation and automation. Their team approached me to guide them in adopting the new features and best practices of Java EE 7 effectively.

Anton, what were the reasons for choosing Java and Java EE, and which alternatives did you consider?

This was not a question of creativity or technology. We knew that we had to build thin, web-based applications. We also wanted to avoid having to build native mobile apps in addition to the usual desktop applications. Why? That would have been a luxury for us. When I started at Muehlbauer in Nitra almost two years ago, the software department was a group of talented people who were insourced by the parent company, and they have stayed insourced until now. So it was necessary to build the local team from scratch. And, of course, our goal was to deliver output as soon as possible. A web-based application was therefore a solution presentable on a PC and also as a responsive mobile application. Java EE allows us to implement it in quite a short time while focusing on middleware and the backend. We also realized that if we wanted to build a new Java EE team, we would have to start cooperating with talented young students from the local universities. And last but not least, Java EE was already used in the group, although .NET was the mainstream technology there.

Did you have any programmers in your team who were just beginners with Java EE? How long did it take them to get started with Java EE and become productive?

The beginnings were a little bit hard. We had an experienced developer in the team, but we built our team mostly on newbies. We had to find a way to build up the knowledge base. The main point was to identify who would focus on what. I think that the easiest way is to let people do what they like. Some of them prefer the front-end, some middleware or backend, some are focusing on data. Some are more or less creative concerning architecture and third-party integration. This was the key. But we saw the first meaningful results after approximately a year. On the other hand, I have to admit that for us a newbie is a person who already has some knowledge of Java SE. Then we can improve his or her Java EE knowledge through internal training, training applications or an external trainer.

What are the technological challenges you are facing? With which of them is Java EE helpful to your team?

I do not want to talk about the projects in progress, but I can say that the biggest issue is the integration of third-party products. We are dealing with a lack of information, of interface documentation, or of the supplier's willingness to cooperate. For example, to avoid these issues, we had to implement not only software but also hardware solutions. This was a really great experience for me, because I have a software background but treat hardware as some kind of magic. What I also appreciate is that we are using PrimeFaces, so we are able to implement a responsive, mobile-friendly front-end quite quickly. And I have to add that most of our team prefers middleware or backend development.

How do you manage the integration of Java EE with the other platforms/connectors you use?

I’ve already mentioned the integration of third-party platforms. I have to say that we have tried to integrate some C++ and .NET products using JNI. We do it only when we really must, or when the interface is not too complex. Of course, when it comes to integrating our own products, we use SOAP web services, JMS and AMQ. We still do not use any black-box integration platform like Tibco, Oracle's ESB/OSB or any other big integration solution.

Which process methodology do you use to plan and manage software projects?

When the department started to grow, organization became harder and harder. That's why we had to move to proper planning based on an established methodology. Scrum is helping us move our projects forward and meet quality goals and deadlines. All our task management is handled in Redmine.

Which tools or decisions have been most helpful to improve cooperation within your team or quality of the products?

We want to avoid chaos in our work. So we started with continuous integration. We use Git for source control, Gerrit for code review, Jenkins as an automation server and SonarQube for managing code quality. Talking about the team, one of the most important things was team composition and the integration of students, because the IT labour market here in Nitra is small compared to Bratislava. We started communicating with local universities and searching for opportunities to meet the most talented students. This is how we can plan our mid- and long-term personnel strategy. We spent a lot of time training our team, and I can say that I see the results now – after a year and a half.

What are your plans to improve on or to introduce into your processes in the next year?

At the beginning of this year I specified 4 goals to reach in 2016. Stabilization of the team is very important to me. I've mentioned that we started to use agile project management. We are finishing our continuous integration process. We still do not use automated tests – we want to start with JUnit tests and front-end tests based on Selenium. I hope that we will find a way to deliver software of very good quality and on time. We also have to move forward with further training of the team. Last but not least – a very important goal – is to build up the automation team. In general, Muehlbauer has a goal to establish a competence center in Nitra – and our software engineering team will be an important part of it.

Is your company hiring Java EE programmers?

We are looking for junior and senior developers for Java, .NET, C++, as well as for talented people who want to get into IT. I have already mentioned our personnel strategy – cooperation with the local universities and with students. There are also students who want to work in their home region – in Nitra – after their state exams. We want to offer them this opportunity. I have to admit that I see huge improvement in the students who are working on our projects. Our goal is to have a highly professional research & development center. Therefore, in general, R&D is searching for qualified and highly motivated people in IT and engineering.

Anton, thank you for the interview!

This is a copy of the original post I published on my new blog here: Building a new productive team around Java EE 7

Thursday, January 28, 2016

Happy to announce my new blog site!

Although I love blogger.com, it does not provide all the features I'd like to have on my blog page. Therefore I created my new blog site, where I transferred all the blog posts from this blog.

You may visit my new blog site at http://itblog.inginea.eu/.

I will be posting there from now on. In fact, there is already a new post about using Facelets in the new MVC framework in the upcoming Java EE 8.

Wednesday, November 4, 2015

Differences in JPA entity locking modes

JPA provides essentially two types of locking mechanisms to help synchronize access to entities. Both mechanisms prevent a scenario where two transactions overwrite each other's data without knowing it.

With entity locking, we typically want to prevent the following scenario involving two parallel transactions:
  1. Adam’s transaction reads data X
  2. Barbara’s transaction reads data X
  3. Adam’s transaction modifies data X, and changes it to XA
  4. Adam’s transaction writes data XA
  5. Barbara’s transaction modifies data X and changes it to XB
  6. Barbara’s transaction writes data XB
As a result, the changes made by Adam are completely gone, overwritten by Barbara without her even noticing. A scenario like this is usually called a lost update. Obviously, the desired result is that Adam writes XA, and Barbara is forced to review the XA changes before writing XB.

How Optimistic Locking works

Optimistic locking is based on the assumption that conflicts are very rare, and that if they happen, throwing an error is acceptable and more convenient than preventing them. One of the transactions is allowed to finish correctly, but any other is rolled back with an exception and must be re-executed or discarded.

With optimistic locking, here is a possible scenario for Adam and Barbara:
  1. Adam's transaction reads data X
  2. Barbara's transaction reads data X
  3. Adam's transaction modifies data X, and changes it to XA
  4. Adam's transaction writes data XA
  5. Barbara's transaction modifies data X and changes it to XB
  6. Barbara's transaction tries to write data XB, but receives an error
  7. Barbara needs to read data XA (or start a completely new transaction)
  8. Barbara's transaction modifies data XA and changes it to XAB
  9. Barbara's transaction writes data XAB
As you can see, Barbara is forced to review Adam's changes, and if she decides to, she may modify them and save them (merge the changes). The final data contains both Adam's and Barbara's changes.

Optimistic locking is fully controlled by JPA. It only requires an additional version column in the database tables and is completely independent of the underlying database engine used to store the relational data.
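
To enable it, you only add a field annotated with @Version to the entity. A minimal sketch (the Account entity and its fields are purely illustrative):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Account {

    @Id
    private Long id;

    private long balance;

    @Version // incremented by JPA on every update and checked at commit time
    private long version;

    // getters and setters omitted
}

If two transactions update the same row concurrently, the one that commits later detects the version mismatch and fails with an OptimisticLockException.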

How Pessimistic Locking works

For some, pessimistic locking feels very natural. When a transaction needs to modify an entity that could be modified in parallel by another transaction, it issues a command to lock the entity. All locks are retained until the end of the transaction and are automatically released afterwards.
With pessimistic locks, the scenario could be like this:
  1. Adam’s transaction reads data X
  2. Adam’s transaction locks X
  3. Barbara’s transaction wants to read data X, but waits as X is already locked
  4. Adam’s transaction modifies data X, and changes it to XA
  5. Adam’s transaction writes data XA
  6. Barbara’s transaction reads data XA
  7. Barbara’s transaction modifies data XA and changes it to XAB
  8. Barbara’s transaction writes data XAB
As we can see, Barbara is again forced to write XAB, which also contains changes made by Adam. However, the solution is completely different from the optimistic scenario – Barbara needs to wait for Adam’s transaction to finish before even reading the data. Furthermore, we need to issue a lock command manually within both transactions for the scenario to work (as we are not sure which transaction will be served first, Adam’s or Barbara’s, both transactions need to lock the data before modifying it).

Optimistic locking requires more setup than pessimistic locking, with a version column needed for every entity, but then we do not need to remember to issue locks in the transactions. JPA does all the checks automatically; we only need to handle possible exceptions.

Pessimistic locking uses the locking mechanism provided by the underlying database to lock existing records in tables. JPA needs to know how to trigger these locks, and some databases do not support them completely.

Even the JPA specification says that an implementation is not required to provide PESSIMISTIC_READ (as many databases support only WRITE locks):
It is permissible for an implementation to use LockModeType.PESSIMISTIC_WRITE
where LockModeType.PESSIMISTIC_READ was requested, but not vice versa.
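
As a small illustration, a pessimistic lock is typically requested while loading the entity inside a transaction (reusing the hypothetical entity A from the examples below):

A a = em.find(A.class, 1, LockModeType.PESSIMISTIC_WRITE);
// the database row is now locked; any other transaction requesting a conflicting
// lock on it will wait until this transaction commits or rolls back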

List of available lock types in JPA

First of all, I would like to say that if a @Version column is present in an entity, optimistic locking is turned on by JPA for that entity by default. You do not need to issue any lock command. However, at any time, you may request a lock with one of the following lock types:
  1. LockModeType.OPTIMISTIC
    • This really is the default. It is usually ignored, as stated by ObjectDB. In my opinion it only exists so that you may compute the lock mode dynamically and pass it on, even if the lock happens to be OPTIMISTIC in the end. Not a very probable use case, but it is always good API design to provide an option to reference even the default value.
    • Example:

    LockModeType lockMode = resolveLockMode();
    A a = em.find(A.class, 1, lockMode);


  2. LockModeType.OPTIMISTIC_FORCE_INCREMENT
    • This is a rarely used option, but it could be reasonable if you want to lock the entity against being referenced by another entity. In other words, you want to lock work with an entity even if it is not modified itself, while other entities may be modified in relation to it.
    • Example:
      • We have the entities Book and Shelf. It is possible to add a Book to a Shelf, but the book does not hold any reference to its shelf. It is reasonable to lock the action of moving a book to a shelf, so that the book does not end up on 2 shelves. To lock this action, it is not sufficient to lock the current book's shelf entity, as the book does not have to be on a shelf yet. It also does not make sense to lock all target bookshelves, as they would probably be different in different transactions. The only thing that makes sense is to lock the book entity itself, even though in our case it does not get changed (it does not hold a reference to its bookshelf).
  3. LockModeType.PESSIMISTIC_READ
    • This mode is similar to LockModeType.PESSIMISTIC_WRITE, but differs in one thing: unless a write lock on the same entity is already in place from some transaction, it should not block reading the entity. It also allows other transactions to lock using LockModeType.PESSIMISTIC_READ. The differences between WRITE and READ locks are well explained here (ObjectDB) and here (OpenJPA). However, very often this behaves like LockModeType.PESSIMISTIC_WRITE, as the specification allows it and many providers do not implement it separately.
  4. LockModeType.PESSIMISTIC_WRITE
    • This is a stronger version of LockModeType.PESSIMISTIC_READ. When a WRITE lock is in place, JPA, with the help of the database, will prevent any other transaction from reading the entity, not only from writing it as with a READ lock.
  5. LockModeType.PESSIMISTIC_FORCE_INCREMENT
    • This is another rarely used lock mode. However, it is useful when you need to combine the PESSIMISTIC and OPTIMISTIC mechanisms. Using plain PESSIMISTIC_WRITE would fail in the following scenario:
      1. transaction A uses optimistic locking and reads entity E
      2. transaction B acquires a WRITE lock on entity E
      3. transaction B commits and releases the lock on E
      4. transaction A updates E and commits
    • In step 4, if the version column is not incremented by transaction B, nothing prevents A from overwriting B's changes. The lock mode LockModeType.PESSIMISTIC_FORCE_INCREMENT forces transaction B to update the version number, causing transaction A to fail with an OptimisticLockException, even though B was using pessimistic locking.
To issue a lock of a certain type, JPA provides the following means:
  • EntityManager.find() with a lock mode as an additional argument
  • EntityManager.lock() on an entity that is already loaded
  • EntityManager.refresh() with a lock mode
  • Query.setLockMode() or the lockMode attribute of @NamedQuery
You may use either of the two types of lock mechanisms in JPA. It is also possible to mix them if necessary, by using the pessimistic lock of type PESSIMISTIC_FORCE_INCREMENT.
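
A short sketch of how these calls look in code (the entity A, its id and the query are illustrative):

// 1. lock while loading the entity
A a = em.find(A.class, 1, LockModeType.PESSIMISTIC_WRITE);

// 2. lock an entity that is already loaded
em.lock(a, LockModeType.OPTIMISTIC_FORCE_INCREMENT);

// 3. re-read the entity from the database and lock it at the same time
em.refresh(a, LockModeType.PESSIMISTIC_READ);

// 4. lock all entities returned by a query
List<A> list = em.createQuery("SELECT a FROM A a", A.class)
                 .setLockMode(LockModeType.PESSIMISTIC_WRITE)
                 .getResultList();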

To learn more, read this excellent blog post by Vlad Mihalcea.

    Tuesday, March 17, 2015

Does anybody like reading com.superframework.core.base.Object object = new com.superframework.core.base.Object() in the code?

I wonder how many times I have asked myself why Java is so complicated to read and write. Why do I have to keep writing so many characters and lines of code to express a simple, repetitive task? It sometimes appears to me as if the Java language designers keep torturing developers by forcing us to use constructs invented 15+ years ago, without an alternative.

But this one is simply an outrage. Consider that Python, for example, is an even older language than Java, and yet you never have to write complicated namespaces in the code more than once – in the import. Java, on the contrary, does not provide a simple and readable solution when two classes of the same name have to be used in one source file.

Consider the following code comparison.

In Java:
import com.myapp.base.Object;
...
com.superframework.core.base.Object frameworkObject = new com.superframework.core.base.Object();
Object myObject = new Object();

In Python:
from com.superframework.core.base import Object as FrameworkObject
from com.myapp.base import Object as MyObject
...
framework_object = FrameworkObject()
my_object = MyObject()

No commentary needed; everybody can surely see the difference. Although you mostly see the horrible Java code above in generated code, sometimes it is not easy to avoid name clashes, which leads to Java code one would puke from. In this particular example, the imported Object can even be easily confused with the standard java.lang.Object. In Python, you may name it as you wish to avoid name clashes.

After many helpful improvements in this direction in Java 7 and Java 8, I strongly recommend also revisiting a request raised almost 15 years ago for Java version 1.3:
    - http://bugs.java.com/bugdatabase/view_bug.do?bug_id=4478140

I understand that 15 years ago adding syntactic sugar to the language was not a top priority. But nowadays, if Java aims to stay the language of choice for millions of developers, it is necessary to consider small improvements like this, which have a drastic impact on the quality of the code and of developers' work.

    Thursday, March 5, 2015

    Decrease your Java IDE lagging by fine tuning JVM Garbage Collector

Ever wondered why Eclipse/Netbeans keeps pausing for a while every now and then? Especially right when you want to show something in the code to your dear colleagues? It felt embarrassing and awkward, didn't it?

I found out that most of the time the IDE pauses because of Garbage Collector execution – the subtle little element in the design of the JVM which usually does a great job of relieving us developers from worrying about memory consumption. Most people are just happy that it does its job well and ignore it most of the time. However, the consequences of running the Garbage Collector may surprise us if we simply ignore it.
In short, when the GC is running, it pauses execution of the application until it is done freeing memory. This is certainly unacceptable for real-time systems, where Java is not the number one option anyway. But even in huge, non-critical desktop applications, which modern Java IDEs certainly are, the GC may stop the whole application for several seconds. And this may happen several times during the day. You can imagine that with productivity tools like IDEs, this simply drags down their "productivity" effect.

    A solution is to do some tweaking:
- increase the memory of the JVM the IDE is running on (beware that this only reduces the frequency of GC runs but prolongs the execution time of a single run – it takes longer to collect garbage from a bigger pile...)
- switch the default GC engine for a more concurrent engine, which tries to collect garbage continually, even between the stop-everything-until-done executions of a complete GC

The first option is well known to the majority of Java programmers – just define reasonable values for -Xmx, MaxPermSize and the family.

The second option, however, is not so well known. The point is that the Oracle Java HotSpot JVM provides several alternative GC engines to choose from, and, unlike the default one, they perform continuous garbage collection even between the big GC executions that slow everything down.

    G1 Garbage Collector

Since Java 7 update 4, Oracle provides the G1 Garbage Collector in the JVM.

You may enable it simply with this command-line parameter:
    -XX:+UseG1GC
For more info on how to configure G1, see Java Hotspot VM G1 options.

    CMS Garbage Collector

In some benchmarks, the older CMS collector outperforms the newer G1 collector.

You may enable it instead of G1 with this option:
    -XX:+UseConcMarkSweepGC
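
With Eclipse, both tweaks typically end up in the eclipse.ini file, below the -vmargs line. A sketch with assumed values – tune the heap sizes to your machine, replace the GC flag with -XX:+UseConcMarkSweepGC if you prefer CMS, and note that MaxPermSize is only relevant up to Java 7 (Netbeans accepts the same JVM flags via the netbeans_default_options line in netbeans.conf, each prefixed with -J):

-vmargs
-Xms512m
-Xmx2048m
-XX:MaxPermSize=256m
-XX:+UseG1GC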

    Special Eclipse tweaking

GC tweaking really helped to improve the performance of my Netbeans installation. However, to be honest, with the Eclipse IDE, GC tweaking is only one of many steps to optimize performance, and unfortunately only a minor one. What helps a lot more are tweaks in the configuration, like turning off some validations in the code or reducing the size of the console output. In my case, console output was freezing Eclipse so much that I needed to redirect the standard output of my application server to a file and bypass the console completely :(

    Sunday, January 25, 2015

    CDI events in Swing application to decouple UI and event handling

After having had the pleasure of building my code around CDI for a couple of years, it feels very natural to use it to structure my code according to well-known patterns. CDI is a dependency injection mechanism designed to be used within Java EE application servers, and this could be perceived as a disadvantage. However, I want to show that it can be used, and has great potential, also in a Java SE application.

What is great about CDI is that it is much more than an injection mechanism: on top of that it also provides an elegant and powerful event-passing mechanism. This feature can be nicely combined with Swing to build a GUI application based on the MVC pattern.

    It is really possible to efficiently combine CDI and Swing framework to build a Java GUI application rapidly and with a clear structure. Stay tuned to find out how...

First of all, the reference implementation of CDI, called Weld, is also distributed as a separate library. You may add it to your project and start using it. The only shift from the standard way of running the application is that you need to start the Weld container, which is as simple as this one-liner:


import org.jboss.weld.environment.se.StartMain;

public static void main(String[] args) {
    StartMain.main(args);
}


To add Weld to your Maven application, just add this dependency: org.jboss.weld.se:weld-se:2.2.9.Final
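
In pom.xml this corresponds roughly to the following dependency element:

<dependency>
    <groupId>org.jboss.weld.se</groupId>
    <artifactId>weld-se</artifactId>
    <version>2.2.9.Final</version>
</dependency>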

To execute your application code, you should put it in a method which observes the ContainerInitialized event:

import javax.enterprise.event.Observes;
import org.jboss.weld.environment.se.events.ContainerInitialized;

public void start(@Observes ContainerInitialized startEvent) {
   // code which would usually be in the main() method
}

    In the method above, you may initialize your application, build and display the GUI and wait for Swing events.

And here starts the interesting part. I will use the CDI event mechanism to implement the binding between Swing components and the model using the observer pattern. The idea is to trigger custom events whenever a data update should occur, instead of modifying the data directly. The controller observes the triggered events and executes actions based on the event data. The actions then manipulate the data model and send notifications about data updates to the view. See the following diagram:



The MVC cycle starts in Swing action listeners, which compose an action object and emit it as a CDI event. The action listener is not bound to any controller code – the controller is bound to the event using the CDI mechanism. This completely decouples the GUI code from the business logic. The following snippet responds to a button click and emits an action to add a value to a counter:

    @ApplicationScoped
    class MainFrame extends javax.swing.JFrame {
      @Inject Event<ChangeValueAction> changeValueAction;
    ...
      void addButtonActionPerformed(java.awt.event.ActionEvent evt) {
        changeValueAction.fire(ChangeValueAction.plus(getValue()));
      }
    ...
    }
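
The action itself is just a small data bean carrying the requested change. A minimal sketch of what ChangeValueAction might look like (the exact fields are illustrative):

public class ChangeValueAction {

    private final int amount;

    private ChangeValueAction(int amount) {
        this.amount = amount;
    }

    // static factory used by the view, e.g. ChangeValueAction.plus(value)
    public static ChangeValueAction plus(int value) {
        return new ChangeValueAction(value);
    }

    public int getAmount() {
        return amount;
    }
}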

Here we need to remember that observers of CDI events are by default created as new objects for every event triggered, together with all their dependencies. I used @ApplicationScoped for MainFrame to ensure that all the code operates on the very same instance of it.

One thing to mention here: in order for CDI to work, the instance of MainFrame must be created by CDI, and not by calling its constructor directly. This is achieved by injecting it into an already existing bean – e.g. the one which observes the ContainerInitialized event emitted at start-up, as sketched below.
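
A sketch of such a bootstrap bean (the class name ApplicationStarter is illustrative):

import javax.enterprise.event.Observes;
import javax.inject.Inject;
import org.jboss.weld.environment.se.events.ContainerInitialized;

public class ApplicationStarter {

    @Inject
    private MainFrame mainFrame; // instantiated and managed by CDI, not by new

    public void start(@Observes ContainerInitialized startEvent) {
        // show the CDI-managed frame on the Swing event dispatch thread
        java.awt.EventQueue.invokeLater(() -> mainFrame.setVisible(true));
    }
}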

The CDI mechanism dispatches the event to any observer method that listens for this type of event. We create a controller, Application, and put the code into an observer method, like this:

    public class Application {

    ...
      public void updateValueWhenChangeValueAction(@Observes final ChangeValueAction action) { 
       ... // controller action
      }
    ...
    }

Finally, the controller updates the model and triggers an update of the view if necessary. If we take it further, we may trigger an update event from the controller, which would be observed by the view – in this case the MainFrame component. Or we could even build a model which automatically triggers CDI events on update. Thus, the controller and view would be completely decoupled and would only respond to events – GUI events flowing from View to Controller, and data-update events flowing from Controller/Model to View.
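
For example, after updating the model the controller could fire a hypothetical ValueChangedEvent, which the view observes to refresh itself (a sketch – the event class, the injected Event field and the label are illustrative):

// in the controller Application, after the model was updated
@Inject Event<ValueChangedEvent> valueChangedEvent;
...
valueChangedEvent.fire(new ValueChangedEvent(model.getValue()));

// in MainFrame (the view) – refresh the UI when the event arrives
void onValueChanged(@Observes ValueChangedEvent event) {
    javax.swing.SwingUtilities.invokeLater(
            () -> valueLabel.setText(String.valueOf(event.getNewValue())));
}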

In summary, the CDI event mechanism is very convenient for building an MVC Swing application with the View decoupled from the business logic. This can be accomplished by running your application inside the Weld CDI container (one line of code), triggering actions from Swing listeners (two lines of code) and observing the actions (a single method on any CDI-enabled class). The actions take the form of a data bean, which itself is not many lines of code either.


    A complete example can be found on github: https://github.com/OndrejM/JavaDecoupledUI-CDI



    Additional resources:

    Saturday, November 15, 2014

    Impressions from Geecon in Prague - Day 2

Day 2 started earlier than the day before. A bit too early for me. While I was hurrying to catch the beginning of the first presentation, a stranger with a big suitcase passed by me in an even greater hurry, a bit confused about which way to take. I grinned to myself as I recognized the familiar face from the speakers' section of the Geecon web page. Anyway, speakers are only human too...

Although the latest and greatest themes in the Java world these days are reactive programming, alternative languages, HTML5 and microservices, I decided to stay close to the ground at first. I chose the jBPM presentation to get an update on how things are moving in the old Java enterprise waters.
jBPM looks like a vivid yet mature project, and it is evolving promisingly under the Red Hat umbrella. In fact, the team has a strategy of focusing on knowledge, business goals, their visibility and continuous improvement. Does that ring a bell? To me, that sounds quite close to what agile principles stand for.

OK, enough business, we all want some fun too, right? And the next presentation certainly was about having fun and even conquering the world with home-made devices. Guys, thank you for bringing a real 3D printer there! It was a really great introduction for a beginner to see what the thing looks like and how it works.
Photo: the real €600 3D printer on stage – a wooden skeleton with the machinery visible from the outside.
Two things to highlight here: a fully functional 3D printer can be as cheap as €600, and "what may fail will fail" – at least until you learn how to use it properly.
Some of the toolset around 3D printers:
• 3D editors like Blender or Tinkercad
• a slicer (Cura, Repetier-Host), providing more or less the functionality of the missing 3D printing dialog
• and, not to forget, hair spray (e.g. Taft) – a very handy "tool" to improve adhesion to the print bed when it is not heated, no kidding
The iBeacon has not won me over yet, though. These days it looks like there are many gadgets out there with great potential that are still seeking useful applications. iBeacon seems to fall into this category, sorry.

To let the fun continue, I then switched to the latest and greatest trends. Microservices, here I come! The idea of cutting a huge system into small, focused microservices is easy to grasp. Knowing when such an architecture really helps and how to design it properly is still dark waters for me. Speakers from a company based in Ostrava came to present their approach and experiences. Well, big European mobile operators under the pressure of the market tend to produce a constant flow of change requests. To meet their requirements, a microservices architecture seems to be a perfect match. Such systems can be maintained in a distributed manner, components can be redeployed separately in different time-frames, and failures don't pull the whole system down. The obvious drawback – transactions are hard to manage across multiple services. But as Neal pointed out, maintaining transactions can be really slow and inefficient. Designing operations as idempotent might come to the rescue – how many times have you checked an already locked door just to be sure? Certainly many times, and it didn't cost much.

Later I found out why, oh why, that app server eats so much RAM after I redeploy my applications, and that there is Plumbr to aid in searching for memory leaks. Yes, classloading is really tricky, even with respect to memory consumption.
A simple rule noted down: always unregister classes you publish to anything on the app server's classpath. For the rest, an almighty restart might be cheaper than the time spent hunting for leaks.

DukeScript then steered my course even deeper into the juicy modern stuff. Another attempt to bring Java everywhere, even if compiled to JavaScript. Is it still Java then? Well, it seems it passes the duck test: the source code is Java, it executes in a JVM, all the Java tooling works with it. But.
The GUI is based on the omnipresent HTML5, meaning that neither JavaFX nor Swing components are available. Also, under the hood, the runtime obviously varies according to what the different platforms have to offer – a JVM where available, otherwise Dalvik on Android, RoboVM on iOS, and the Bck2Brwsr ("back to browser") JavaScript VM in modern browsers. Yes, I wrote JavaScript VM. Java can really be executed even in a browser without any plugin, just plain JavaScript. I have seen it working, so it must be true! It still takes some time to download 2 MB of scripts, but wait, this is still an unoptimized proof of concept which already works well! It is another hit nailing JavaScript as the assembler of the web.

    No more time to mention Erlang, Go, Rust, Scala, Kotlin nor Java 8. Luckily Bruce Eckel had a wonderful closing speech about all of them. Even more, he mentioned the driving forces that lead people to design new languages. Better support for parallelism, readability, even purity in design are among them. But purism proved not to be very helpful and mostly did not work. Software transactional memory has not grown to be really usable concept as well. On the other hand, inherent support of parallelism becomes a valuable selling point of modern languages, like Erlang. Even Java 8 has now support for parallel streams. Though, Java 8 stays way too boring. Maybe "Thinking in Kotlin" will be the next book by Bruce Eckel? Anyway, after all that hassle, Python stays Bruce's favourite language - no wonder, more than 20% women in the active community are and will stay another great selling point. Applause to Mr. Eckel for his insights and thank you for reading!