Our application platform appNG has just received a version update. Among other things, we now have a standalone version, and we support Tomcat 8 as well as Spring bean profiles and HTTP HEAD requests. Additionally, dependency upgrades played a major role in this new version.
The biggest innovation in version 0.11 is support for Tomcat 8. Tomcat 6 is still pretty common (unfortunately); we ourselves have been using Tomcat 7 so far. "Our" servlet container, though, is developed continuously so as to react to new Java features. Thus, I provided the relevant implementations for both Tomcat 7 and Tomcat 8. What interests the Java developer most here is which specifications are supported.
Expression Language 3.0
In Tomcat 8, for example, the Expression Language has been updated from 2.2 to 3.0, a major version update. The Servlet spec, updated from 3.0 to 3.1, now offers non-blocking I/O for request handling. Regarding JSP, on the other hand, not too much has happened.
The second big improvement within appNG 0.11 is a standalone version. The idea here being that for evaluation and test purposes, or in order to build a cluster, it could come in handy not to have to install Tomcat, database and such separately. So now, there is an artefact containing everything necessary that can be started from the command line. Here, too, Tomcat 8 is in use, its embedded version, to be precise - for embedding, the usual Tomcat JARs are aggregated and compressed into fewer archives.
Additionally, the standalone version contains a repository for the applications and the templates, which enables this artefact to run completely on its own. Installation from a remote repository is not necessary; instead, everything needed is already included: the web application itself, the standalone JAR as well as a ReadMe. Thus, it can be started directly in the console.
Regarding the core, we added a few more small things. Support for Spring bean profiles, for one - following the idea that one should be able to create various profiles to start the application with. One might want to create a debug profile for a test server: when that profile is active, certain debugging web services are enabled automatically - something you would not want to happen in a productive environment for security and performance reasons.
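Spring activates profiles via the `spring.profiles.active` property. As a minimal sketch of such profile-gated behavior - without a Spring dependency, and with the "debug services" part purely hypothetical - the idea looks roughly like this:

```java
import java.util.Arrays;

// Minimal sketch of profile-gated behavior. Spring itself reads the
// "spring.profiles.active" property; the debug-service registration
// hinted at below is hypothetical, not appNG's actual wiring.
class ProfileGate {

    // check whether a profile is among the comma-separated active profiles
    static boolean isProfileActive(String profile) {
        String active = System.getProperty("spring.profiles.active", "");
        return Arrays.asList(active.split(",")).contains(profile);
    }

    public static void main(String[] args) {
        System.setProperty("spring.profiles.active", "debug");
        if (isProfileActive("debug")) {
            // here one would register the debug-only web services
            System.out.println("debug services enabled");
        }
    }
}
```

With Spring itself, the same gating would typically be done by annotating the bean with `@Profile("debug")`.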
Plus, we now support HTTP HEAD requests. Those are not as common as, say, POST and GET, but HEAD is used by search engines, among others: the response carries only meta information about a page - not its content, but things like the content type (e.g. text/html), the size in bytes and so forth.
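As a sketch of the difference: a HEAD response carries the same headers as the corresponding GET, but no body. The tiny response holder below is illustrative only, not appNG's API:

```java
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch: HEAD yields the same Content-Type and
// Content-Length headers as GET, but an empty body.
class HeadHandler {

    static class Response {
        final Map<String, String> headers = new LinkedHashMap<>();
        byte[] body = new byte[0];
    }

    static Response handle(String method, String html) {
        byte[] content = html.getBytes(StandardCharsets.UTF_8);
        Response r = new Response();
        r.headers.put("Content-Type", "text/html; charset=utf-8");
        r.headers.put("Content-Length", String.valueOf(content.length));
        if (!"HEAD".equals(method)) {
            r.body = content; // GET delivers the body, HEAD omits it
        }
        return r;
    }

    public static void main(String[] args) {
        Response head = handle("HEAD", "<html>hello</html>");
        System.out.println(head.headers + ", body: " + head.body.length + " bytes");
    }
}
```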
As with every version upgrade, dependency upgrades were a major topic for appNG 0.11 as well. In appNG, we use a good three dozen open source libraries, which themselves are updated regularly, of course. The most important one among them is Spring - it's basically appNG's foundation. So far, we had been on the 3.x branch and now went to 4.x.
Lucene, too, underwent a major version update; we are on the 5.x branch here now. We updated several smaller ones, too - when it comes to dependencies, we are state-of-the-art now.
And speaking of dependencies, there is a great site, https://www.versioneye.com/, where you upload your POM - your application's parts list, as it were. It then tells you which of your dependencies are outdated.
Dependency upgrades are always a great way to test your own code: it might absolutely happen that you upgrade dependencies - and things stop working as a result. That might mean that, in the past, it worked accidentally and a bugfix in the library now made it stop doing so. But it might also mean that changes in the new version are not backward compatible. Either way, careful testing is advisable.
Luckily, Semantic Versioning has become pretty common in the meantime. In case of a major version update, indicated by a change to the first number of the version, a change to the API is likely. If so, you do have to expect compile errors. Refactoring, then, can cause quite some headache.
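That major-version check is easy to express in code; here is a minimal sketch, assuming plain `major.minor.patch` version strings:

```java
// Minimal Semantic Versioning sketch: under SemVer, a change to the
// first ("major") number signals a potentially breaking API change.
class SemVer {

    static int major(String version) {
        return Integer.parseInt(version.split("\\.")[0]);
    }

    static boolean breakingUpgrade(String from, String to) {
        return major(to) > major(from);
    }

    public static void main(String[] args) {
        // Spring 3.x -> 4.x: expect API changes and possible compile errors
        System.out.println("3.2.18 -> 4.2.9 breaking? " + breakingUpgrade("3.2.18", "4.2.9"));
        // 4.0 -> 4.1: minor update, should be backward compatible
        System.out.println("4.0.0 -> 4.1.0 breaking? " + breakingUpgrade("4.0.0", "4.1.0"));
    }
}
```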
On our test environment, for example, I had switched over to Hibernate 5. There were no compile errors, but some failed unit tests. That might mean that Hibernate suddenly does things it didn't do before, which would raise the question: is it a bug, or does the behavior in question match the JPA specification? Option two would be that we are doing things that are not intended but that accidentally, somehow, worked before - then, we'd have to adapt our source code. For that exact reason, Hibernate 5 is not part of appNG 0.11 yet.
In our application.xml, we now support CLOB (Character Large Object) properties - long strings that may span several lines. An application.xml describes an application's main features; this is where standard configuration values are stored: roles, authorizations, and said properties, too. In the past, appNG's management interface only supported text input fields. So here, the functionality was extended: now, you can insert longer text units in a CDATA section.
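A CDATA section lets such multi-line values live in the XML without escaping. The element and attribute names below are only meant to illustrate the idea - they are hypothetical, not appNG's actual application.xml schema:

```xml
<properties>
  <!-- a plain, single-line property -->
  <property name="pageSize">25</property>
  <!-- a CLOB property: multi-line text wrapped in a CDATA section -->
  <property name="welcomeText" clob="true"><![CDATA[
    Welcome!
    This text may span several lines
    and contain markup like <b>bold</b> without escaping.
  ]]></property>
</properties>
```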
The next change goes back to a proposal by our systems administrator Björn Pritzel: The templates have been moved to a database and thus have been made clusterable. Cluster-ability is one of our main requirements in appNG, and that goes for the templates as well as for the applications. Now, Björn won't have to establish any more manual replication mechanisms.
There was some need for action concerning the influence of site reloads on memory. After a certain number of site reloads, we had repeatedly run into OutOfMemoryErrors; it seemed something had been left behind in memory. As it turned out, the reason was a pair of bugs in Spring Data (DATACMNS-648, DATACMNS-733), where static references were kept.
As appNG uses its own class loaders, Spring Data holding static references to a certain class loader turned into a problem: as long as a class loader is referenced, it cannot be garbage collected. The tickets we created were resolved by Spring Data's mastermind Oliver Gierke, which enabled us to at least minimize the effect.
New search API
In the course of updating to Lucene 5, we also changed and extended our search API, partly because one of our customers wanted their search results to combine global results with results from a database specific to their appNG application. So far, due to the different sources, that wasn't possible: the "normal" page content is what's published from the CMS, further content comes from the database. With the new search API, it is possible to search multiple sources at the same time and display all the results combined - either completely mixed or sorted by source - using onboard features.
Document or Search Provider?
This is based on the fact that an application can implement either a document provider or a search provider. In the former case, once the global indexer wants to go to work, it "asks" every application: "Do you have anything to add to my index?" If, at the time of indexing, an application does have something to add, it can implement this interface. In the latter case, an application has its own Lucene index - which could be the case for several reasons, for example if its structure is different or more complex than the global search index's - and can implement said search provider. Here, appNG says "I'll be searching now" (not "I'll be indexing now"), and only then does it "ask" the application: "I am searching for 'potato', do you have anything to add for this search term?" The results are then sorted by their score.
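The two contracts described above can be sketched like this; all interface, class and method names here are hypothetical illustrations, not appNG's actual search API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of the two integration styles for the global search.
class SearchSketch {

    static class Result {
        final String title;
        final float score;
        final String source;
        Result(String title, float score, String source) {
            this.title = title;
            this.score = score;
            this.source = source;
        }
    }

    // a) document provider: asked at indexing time to contribute documents
    interface DocumentProvider {
        List<String> getDocuments();
    }

    // b) search provider: owns its own Lucene index, asked at search time
    interface SearchProvider {
        List<Result> doSearch(String term);
    }

    // combine the results of all sources, sorted by score
    static List<Result> merge(List<List<Result>> perSource) {
        List<Result> all = new ArrayList<>();
        perSource.forEach(all::addAll);
        all.sort((a, b) -> Float.compare(b.score, a.score));
        return all;
    }

    public static void main(String[] args) {
        List<Result> global = Arrays.asList(new Result("Potato recipes", 0.9f, "cms"));
        List<Result> app = Arrays.asList(new Result("Potato suppliers", 0.7f, "database"));
        merge(Arrays.asList(global, app))
                .forEach(r -> System.out.println(r.source + ": " + r.title));
    }
}
```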
This being said about changes to the appNG core, we'll now turn to the CLI (command line interface). Here, we now have the option to delete templates and to list all the applications of one site. The idea was to be able to administer everything via the CLI that can be administered using the manager. If I have to configure a site, I now have a command at my disposal that shows me whether a certain application is already active at a certain site. Likewise, I can install, deactivate or remove sites per command now. As with administration from the command line in general, this particular change, too, aims at automating appNG via scripts.
Along with the appNG standalone, support for placeholders in CLI batch scripts has been improved. In those batch scripts, a file path to a local application repository is needed. That, again, depends on where the zip has been unpacked and on which operating system you are. That's why you can now reference system properties and system environment variables within an autoinstall.list, which is more or less appNG's installation script.
Plus, there were some smaller changes. For one, we added date headers to the e-mails sent by appNG, since missing date headers cause problems with some mail clients and practically all spam filters.
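Mail clients and spam filters expect a Date header in RFC 5322 form, e.g. "Tue, 3 Jun 2008 11:05:30 GMT". With JavaMail one would simply call `MimeMessage.setSentDate(new Date())`; as a sketch, the header value itself can be produced with the JDK alone:

```java
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

// Produces an e-mail Date header in the RFC 5322 / RFC 1123 format
// that mail clients and spam filters expect.
class MailDateHeader {

    static String dateHeader(ZonedDateTime sent) {
        return "Date: " + DateTimeFormatter.RFC_1123_DATE_TIME.format(sent);
    }

    public static void main(String[] args) {
        System.out.println(dateHeader(ZonedDateTime.now(ZoneOffset.UTC)));
    }
}
```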
As one of our customers had exceeded the number of geocoding requests available for free from Google, they opened an account, which enables them to make a significantly higher number of requests. Those requests have to be signed so that Google can verify them. Among the appNG tools, we now provide a service that sends signed geocoding requests to Google.
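Google's documented signing scheme for its Maps web services is an HMAC-SHA1 over the request's path and query, keyed with the account's url-safe Base64 private key, with the signature appended as a `signature` parameter. A minimal sketch - the key and URL below are dummies:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Sketch of URL signing as used by Google's Maps web services:
// HMAC-SHA1 over path + query, key and signature in url-safe Base64.
class UrlSigner {

    static String sign(String pathAndQuery, String urlSafeBase64Key) throws Exception {
        byte[] key = Base64.getUrlDecoder().decode(urlSafeBase64Key);
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(key, "HmacSHA1"));
        byte[] sig = mac.doFinal(pathAndQuery.getBytes(StandardCharsets.UTF_8));
        return Base64.getUrlEncoder().encodeToString(sig);
    }

    public static void main(String[] args) throws Exception {
        // dummy path and dummy key ("test-key" in Base64)
        String request = "/maps/api/geocode/json?address=Berlin&client=gme-demo";
        System.out.println(request + "&signature=" + sign(request, "dGVzdC1rZXk="));
    }
}
```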
On to the appNG Manager now, the web-based management interface. The new functionality "Manage Templates" is related to the fact that the templates are now stored in a database. In a new tab, I can now see which templates are installed, install new ones, edit them and so on - same as with the applications.
Next topic: a filter for sessions. This feature was created because, with some customers, we have 5,000 sessions and more by now. If you are searching for one specific session, there is that trick of setting the chunk size to 5,000 in order to see all of them at the same time - but even then, you still have to find the right one manually. From 0.11 on, you can filter by several criteria, like for example ID, user agent or others.
The next point focuses on usability: "remove property prefix". An application's properties have very long prefixes, which "only" reflect implementation details. The prefix doesn't help the user in any way; on the contrary, it might even confuse them, as the really important part is found at the very end. That is neither nice nor practical, so I removed the property prefix.
One of the basic ideas behind this new version was to improve cluster communication. Until now, the problem was that cluster communication used multicast, which only works correctly if all the machines are located in the same network. Since it is quite possible that this is not the case, our colleague Claus Stümke evaluated a message-oriented middleware called RabbitMQ. Technically, it works based on events. One event asks the other nodes something like "tell me how you are", and all the nodes answer with their respective system status. If I reload a site, I write that into the queue as an event. The other nodes receive that event and, consequently, reload the site in question, too.
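The event style described above can be sketched like this: a node puts a serialized event on the queue, every other node receives it and reacts. The class and method names are illustrative, not appNG's actual messaging API; the byte-array round trip stands in for the trip through RabbitMQ:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Hypothetical sketch of a cluster event: serializable, so it can travel
// through a message queue and be performed on every receiving node.
class ReloadSiteEvent implements Serializable {

    final String siteName;

    ReloadSiteEvent(String siteName) {
        this.siteName = siteName;
    }

    // what a receiving node would do with the event
    String perform() {
        return "reloading site " + siteName;
    }

    // serialize and deserialize, as the event would travel through the queue
    static ReloadSiteEvent roundTrip(ReloadSiteEvent event) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(event);
        oos.flush();
        ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()));
        return (ReloadSiteEvent) in.readObject();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip(new ReloadSiteEvent("demo-site")).perform());
    }
}
```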
Related to this is the feature "apply to all cluster nodes". The Log4j configuration is located in the file system (which, actually, is the first disadvantage right there - we should think about putting it into the database), and thus one node initially doesn't know when I change another one's log configuration. With "apply to all cluster nodes", the information that and how the log configuration has changed is sent out through the messaging as an event. Consequently, the new Log4j configuration is activated on all the other nodes, too.
This being optional is an advantage: conceivably, one might want to increase logging only on one node, and then it does make sense not to apply the change to all cluster nodes automatically. We find this very chic, especially since communication doesn't use multicast but normal unicast IP addresses, which means the whole thing can be routed, too, and can happen in different networks or even in different data centers.
Redis Session Store
Furthermore, we now support Redis as a session store. So far, we only had the option to keep sessions per Tomcat in a hash map in memory. Thus, we had to configure sticky sessions on the load balancers; the sessions were not replicated within the cluster. Thanks to Redis support, it is now possible to store every session on an external Redis server and to read it from there, too. Thus, which node a user request lands on becomes irrelevant. On the load balancer, we can now configure another strategy, like for example round-robin. By turning the individual appNG nodes stateless, as it were, when it comes to user sessions, we increase appNG's availability and fail-safety: even if we keep using sticky sessions - combined with Redis as session store - and the server where the session was held goes down the drain, the session is available on another node, too. The user won't even notice. The overhead is only around five to twelve milliseconds per request, and I think this will make us considerably more fault-tolerant.
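The indirection that makes the nodes stateless can be sketched as follows: every node reads and writes sessions through a shared store keyed by session id, so any node can pick up where another left off. The static map here stands in for the external Redis server; all names are illustrative, not appNG's implementation:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative contract: a session store shared by all cluster nodes.
interface SessionStore {
    void save(String sessionId, Map<String, Object> attributes);
    Map<String, Object> load(String sessionId);
}

class RedisStandIn implements SessionStore {

    // shared across "nodes" (instances), like an external Redis server would be
    private static final Map<String, Map<String, Object>> STORE = new ConcurrentHashMap<>();

    public void save(String sessionId, Map<String, Object> attributes) {
        STORE.put(sessionId, new HashMap<>(attributes));
    }

    public Map<String, Object> load(String sessionId) {
        return STORE.get(sessionId);
    }

    public static void main(String[] args) {
        SessionStore nodeA = new RedisStandIn();
        SessionStore nodeB = new RedisStandIn();
        Map<String, Object> session = new HashMap<>();
        session.put("user", "admin");
        nodeA.save("abc123", session);            // request lands on node A
        System.out.println(nodeB.load("abc123")); // next request may land on node B
    }
}
```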
And last but not least, there are changes in the security department. For one, the session fixation problem has been eliminated: after logging in, the user receives a new HTTP session. Plus, it is now possible to activate a filter against CSRF attacks, which uses the synchronizer token pattern.
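In the synchronizer token pattern, a random token is stored in the server-side session, embedded in each form, and every state-changing request must echo it back; the comparison should be constant-time. A minimal sketch of the token handling (illustrative only, not appNG's filter):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

// Sketch of the synchronizer token pattern: issue a random per-session
// token, validate it on every state-changing request.
class CsrfToken {

    private static final SecureRandom RANDOM = new SecureRandom();

    // stored in the HTTP session and rendered into each form as a hidden field
    static String issue() {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    // constant-time comparison, rejecting missing tokens
    static boolean valid(String sessionToken, String requestToken) {
        if (sessionToken == null || requestToken == null) {
            return false;
        }
        return MessageDigest.isEqual(
                sessionToken.getBytes(StandardCharsets.UTF_8),
                requestToken.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        String token = issue();
        System.out.println("form token valid: " + valid(token, token));
    }
}
```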