Thursday, March 31, 2011

Java update breaks jps, jconsole, etc

The latest update of Java broke jps and jconsole when Java is launched specifying "-Djava.io.tmpdir" with a value of anything other than $TMPDIR. This affects Virgo and Tomcat.

The problem was introduced in Java 6 update 21 and is fixed (Sun bug 7009828) in Java 6 update 25.
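A minimal sketch of why the tools lose sight of the JVM. HotSpot publishes per-process performance data in a directory named hsperfdata_&lt;user&gt; under a temp directory; on affected builds, jps/jconsole and the JVM can disagree about which temp directory that is when -Djava.io.tmpdir is overridden. (The exact lookup rules are internal to the JDK; this just prints the conventional location.)

```java
public class PerfDataLocation {
    public static void main(String[] args) {
        // The directory the monitoring tools conventionally look in.
        String tmp = System.getProperty("java.io.tmpdir");
        String user = System.getProperty("user.name");
        System.out.println(tmp + "/hsperfdata_" + user);
    }
}
```

Running this inside a JVM started with `-Djava.io.tmpdir=/some/other/dir` makes the discrepancy visible.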

A workaround for Virgo is as follows. On Mac and Linux, delete the following line from dmk.sh:

-Djava.io.tmpdir=$TMP_DIR \
and on Windows, delete the following line from dmk.bat:
set KERNEL_JAVA_PARMS=%KERNEL_JAVA_PARMS% -Djava.io.tmpdir="%TMP_DIR%" 

Eclipse Virgo diagnostics for "uses" conflicts

Congratulations to Felix on producing some really nice diagnostics for uses conflicts, and thanks to Neil Bartlett for his blog on the subject.

I thought I'd try a similar example in Virgo for comparison. Here are Virgo's diagnostics:

Unable to satisfy dependencies of bundle 'org.example.foo.xydata' at version '0.0.0': Cannot resolve: org.example.foo.xydata
    Resolver report:
        Uses violation: ⟨Import-Package: org.example.foo.bar.datastore; version="0.0.0"⟩ in bundle ⟨org.example.foo.xydata_0.0.0[1301563814803]⟩
            Found conflicts:
                package        'org.example.foo.util.verify_1.0.0' in bundle 'org.example.foo.platform.util_0.0.0[1301563814806]' used by 'org.example.foo.bar.datastore_0.0.0' in bundle 'org.example.foo.bar.datastore_0.0.0[1301563814805]'
                conflicts with 'org.example.foo.util.verify_2.0.0' in bundle 'org.example.foo.framework_0.0.0[1301563814804]' imported by bundle 'org.example.foo.xydata_0.0.0[1301563814803]'

(The numbers in square brackets are bundle ids - they are enormous because Virgo is attempting resolution in a "side state". Bundles with enormous ids have either been deployed directly or auto-installed from the Virgo repository and are in the process of being resolved. They are committed to the OSGi framework only if the directly deployed bundle(s) can be resolved successfully.)

The Virgo formatting isn't quite as pretty as Felix's, but it looks reasonable on a wide terminal window. However, there are some subtle differences:
  • Felix dumps out the dependency chains whereas Virgo does not attempt that. We had in mind examples where the dependency chains are unmanageably long, i.e. those where good diagnostics really are essential. But having seen Felix's diagnostics, we would like to dump out the chains in simple cases. On the other hand, that area of code is pretty complex, so this is not something we'll be rushing to do.
  • Felix depicts wirings from import to export which could be especially useful for newcomers to OSGi.
  • The Felix diagnostics are built into the framework, whereas non-Virgo users of Equinox do not get the Virgo diagnostics.
  • The Virgo diagnostics highlight the fact that there is a uses violation whereas Felix leaves the user to infer that from the rest of the message. It would be nice if Felix were more explicit because there are other reasons a bundle cannot be resolved.
  • Felix highlights the duplicated package whereas Virgo highlights the import that encountered the uses constraint. Virgo is designed to cope well when there are multiple duplicated packages. It's not clear what the Felix diagnostics would look like in such cases.
Note: on a resolution failure Virgo dumps the "side state" to disk for offline analysis. With this example, the offline analysis produced poorer diagnostics than were printed to the console, so I raised bug 341460 to investigate. The beauty of offline analysis is that it lets you explore the details of the bundles. For example, the following screenshot shows the details of the duplicated package, including the import version ranges which result in the attempt to use the distinct versions of the duplicated package:



Anyway, I'm really pleased that we are starting to see some stiff competition on diagnostics. This will help OSGi users enormously.

Postscript
Bug 341460 is fixed in the Virgo 3.0 line. The admin console now displays the same diagnostics that were printed to the console.

Wednesday, March 30, 2011

Enterprise OSGi Runtimes and Tooling

It was great to see several enterprise OSGi runtimes in action at EclipseCon.

Readers of this blog will be familiar with Eclipse Virgo, so suffice it to say that the tutorials covered developing applications using Eclipse Gemini components (reference implementations of several OSGi Enterprise standards) in Virgo, and configuring and managing web applications in Virgo.

Several of the tutorials developed Web Application Bundles, or WABs, which are part of the OSGi Enterprise standard.  A Web Application Bundle is like a WAR file which specifies the web context path in its manifest and which obtains its dependencies using OSGi instead of having to include them all in WEB-INF/lib. The tutorials also implemented persistence using the OSGi Enterprise standard mapping of Java Persistence API (JPA).
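A minimal WAB manifest might look like the following sketch. The bundle symbolic name, version, and package range are illustrative; Web-ContextPath is the header defined by the OSGi Web Applications specification:

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: org.example.blog.web
Bundle-Version: 1.0.0
Web-ContextPath: /blog
Import-Package: javax.servlet;version="[2.5,3.0)"
```

The servlet classes and static resources are laid out as in a WAR, but shared dependencies come in via Import-Package rather than WEB-INF/lib.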

Apache Aries provides a collection of components including implementations of several Enterprise OSGi standards and an application model similar to Virgo's. The EclipseCon tutorial had us creating a blogging application using the free IBM® Rational® tooling. The application consisted of an Aries application containing a plain OSGi bundle, a bundle utilising JPA persistence, and a WAB. The JPA bundle used Aries extensions to the standard blueprint service. The WAB used JNDI to look up the service published by the persistence bundle - the same approach that was taken in the Gemini and Virgo tutorial. We then deployed the application to a target platform which included Aries bundles.

Apache Karaf is similar in concept to the Virgo kernel. Karaf has several functions that Virgo lacks, some of which are good candidates to add to Virgo. Apache Geronimo is based on Karaf.

GlassFish now has good support for OSGi. The EclipseCon tutorial started with us developing a basic client bundle calling a service published by a back-end bundle. We then replaced the client bundle with a WAB and the back-end bundle with a JPA persistence bundle which published a service in the form of an EJB. The WAB used custom annotations to inject the service into the WAB's servlet. The EJB was used to demarcate transactions and security. The M2Eclipse "core" and "extras" plugins provided a basic Eclipse development environment, although the bundles were deployed via the command line. Kudos to the GlassFish team, including Arun Gupta and Sanjeeb Sahoo who led the tutorial, for possibly the most polished hands-on tutorial I've experienced.

I was also very pleased to see the Eclipse Libra project making progress. The version destined for inclusion in this year's Eclipse release train can be used to develop WABs. Runtime launching support is in the pipeline. We also hope to migrate standards based features from the Virgo tooling into Libra.

git tip: listing or editing the commits on a branch

Sometimes I need to list all the commits on a branch. I found the following command does the trick, even after I have merged from master into the branch one or more times:

git rev-list my-branch ^master

This lists all the commits reachable from my-branch but not reachable from master.

You can also change the committer or author info of commits on a branch by specifying my-branch ^master as the revision range on git filter-branch (see git help filter-branch).

Tuesday, March 29, 2011

Quick survey of Virgo and SpringSource dm Server usage

I would be grateful if users of Virgo and SpringSource dm Server would take a short, anonymous survey to help us in our planning. It should take no more than two minutes to complete.

Correction: Oracle do not contribute to Virgo

A recent Oracle blog stated:

Also presented were Eclipse projects, that Oracle engineers are contributing to: Gemini, Virgo, Sapphire and Web Tool Platform.
I submitted a comment on the blog pointing out that Oracle engineers contribute to Gemini but not Virgo. Unfortunately, the comment was not published and the blog was not corrected so perhaps this blog will help to set the record straight.

That said, Virgo would welcome contributions from Oracle (and others). We've enjoyed collaborating with Oracle in the Gemini project and we really appreciate having Virgo committers from three different companies; a few more would be even better.

Monday, March 21, 2011

Incremental OSGi with Virgo, Gemini, and Libra

As I anticipate the start of EclipseCon 2011 later today, I'm reflecting on the current state of the Eclipse Virgo, Gemini, and Libra projects.

As I'll be discussing on Tuesday, Virgo evolved out of a relatively mature project whose aim was to make it easy for enterprise applications and enterprise application developers to adopt OSGi. The goal was to make the transition to OSGi incremental. Essentially you can start with a standard WAR file and deploy it to Virgo as-is. Then you can incrementally refactor the WAR file into an application comprising multiple OSGi bundles while continuing to use familiar technologies such as Spring, Hibernate, JPA, etc. We based Virgo on Tomcat suitably embedded into OSGi, again to provide familiarity for enterprise developers as well as systems administrators.

In parallel with the evolution of Virgo, the OSGi Alliance Enterprise Expert Group, being discussed on Tuesday, produced a series of enterprise specifications, again with the goal of enabling enterprise applications to migrate straightforwardly to OSGi. The reference implementations of these specifications became today's Eclipse Gemini project:

  • Gemini Blueprint - an OSGi standard dependency injection container which also supports the popular Spring and Spring DM namespaces
  • Gemini Web - support for servlets in the form of OSGi standard Web Application Bundles (WABs) or standard WAR files
  • Gemini JPA - support for JPA persistence in OSGi
  • Gemini DBAccess - modularised JDBC drivers for use in OSGi
  • Gemini Management - JMX management for OSGi
  • Gemini Naming - support for JNDI naming in OSGi
Until recently, integrating the Gemini projects into Virgo has been a fairly specialised skill. As we'll see in a workshop on Thursday, things are now really starting to come together and Gemini Blueprint, Gemini Web, and various other Gemini components can now be deployed and used in Virgo. We'll also be touching on how to manage and configure WABs in Virgo as part of a workshop later this morning.

So much for the past. What about the future? The primary goal of Virgo 3.0 is better integration with other EclipseRT technologies such as Equinox, Jetty, and p2. This will provide richer function and greater choice for enterprise applications as they migrate to OSGi. We'll be hearing about how Virgo and EclipseRT technologies work together on Wednesday. The recently released third milestone of this release shows the progress towards the goal of better integration with EclipseRT:
  • Virgo now uses Equinox Configuration Admin and Event Admin services instead of their Apache equivalents.
  • Virgo has moved to the new framework hooks model for isolation in Equinox which is part of the OSGi 4.3 standard soon to be issued for public review.
  • Virgo now provides two web servers: Virgo Tomcat Server, which continues to be based on Gemini Web, and Virgo Jetty Server. Some exciting technology for modularising the view portion of web applications is also emerging into the Virgo mainstream as we'll hear on Wednesday.
  • p2 support in Virgo is also making good progress although this is currently in prototype form.
Development tooling is also entering an exciting new phase which will greatly help applications migrate to enterprise OSGi:
  • The Libra project, which will be presented on Wednesday, is providing development tooling for building WABs and will ship in the Eclipse Indigo release train later this year.
  • The Virgo development tooling has entered the Eclipse IP process and will soon emerge in the Virgo project. Over time, the standards-based features of the Virgo tooling will be factored out and migrated into Libra.
  • OSGi runtime launchers are also being contributed to Libra. I would hope to see a Virgo launcher for Libra emerge so that Libra can start to take over from the Virgo tooling for many aspects of enterprise OSGi application development.
If you're attending EclipseCon, come and hear the talks, join in the workshops and BoFs, and help to shape incremental OSGi in the Virgo, Gemini, and Libra projects.

Friday, March 18, 2011

Importing packages into a 3rd party bundle

I regularly come across the situation where someone needs to import additional packages into a 3rd party OSGi bundle. Sometimes this is because the bundle manifest is incorrect, but more often it's because the bundle uses Class.forName or similar to load classes that are somehow configured at runtime.
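The dynamic-loading pattern that causes the trouble looks like the sketch below: the class name arrives from configuration at runtime, so no Import-Package for its package appears in (or can be generated for) the bundle's manifest. The property name and default value here are illustrative.

```java
public class DynamicLoad {
    public static void main(String[] args) throws Exception {
        // The class name is only known at runtime, so static tooling
        // cannot infer an Import-Package for it.
        String name = System.getProperty("impl.class", "java.util.ArrayList");
        Class<?> impl = Class.forName(name); // resolved by this class's loader
        System.out.println(impl.getName());
    }
}
```

In OSGi, Class.forName here resolves against the calling bundle's class loader, which fails unless that bundle imports the configured class's package.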


Unless the bundle is signed, it's possible to edit the manifest to add the import, but this is a really bad idea. If the original bundle needs upgrading, you have to remember to re-edit the manifest. Also, it's easy to mess something up in the process. Finally, it's bad practice to change the contents of a bundle without modifying the bundle version.

Thankfully, there is a simple way to add imports to a bundle without resorting to such invasive surgery: use a fragment bundle. Fragment bundles can extend the manifest of their host in various ways. In particular, they can add package imports. The fragment bundle need only contain a manifest, for example:


Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: my.fragment
Bundle-Version: 1
Bundle-Name: My Fragment
Fragment-Host: host.symbolic.name;bundle-version="[2.4,2.5)"
Import-Package: extra.package;version="[1.1,1.2)"

More than one fragment bundle can attach to a given host bundle, so this technique can be used when unrelated groups of packages need to be imported: a separate fragment can be used for each group.

Each fragment bundle needs to be installed before its host is resolved, at which point the fragment's manifest is merged into the host's and the resultant Bundle instance imports the packages added by the fragment. On Virgo, this can be achieved by placing the fragment bundles in repository/usr. When the host is deployed, the fragments are automatically installed along with the host.

Note: make sure you use the "bundle-version" attribute of Fragment-Host. "version" is not a standard attribute of Fragment-Host and if you use it accidentally in place of "bundle-version", chances are your fragment will not attach to its host.

Thursday, March 17, 2011

Virgo 3.0.0.M03 and EclipseCon

Milestone 3 of Virgo 3.0 is available for download. Apart from several bug fixes, this milestone includes a contribution from Tom Watson who is collaborating with the Virgo team to prepare the region digraph for its move to Equinox so that it is available to all Equinox users rather than just Virgo users.

We will be using milestone 3 in tutorials at next week's EclipseCon. If you're attending, watch out for Virgo committers delivering the following sessions:

Hope to see you there!

Monday, February 28, 2011

Thread Context Class Loading in Virgo

A number of popular open source libraries, particularly persistence providers, use thread context class loading to load application types. This is an issue in OSGi once an application is divided into more than one bundle. Each bundle has its own class loader and no particular one of these bundle class loaders is necessarily suitable for loading all the application classes that libraries need to load.

Virgo overcomes this problem with its notion of a scoped application. A scoped application consists of either a PAR archive file containing the application artefacts (bundles, property files, plans, etc.) or a scoped plan referring to the application artefacts. The scope limits the visibility of application packages and services so that scoped applications do not interfere with one another. But Virgo also creates an additional synthetic context bundle for each scope and it's this bundle's class loader which is used for thread context class loading when the application calls out to libraries.

The synthetic context bundle is very simple. It's a bundle which uses Virgo's import-bundle manifest header to import each of the bundles in the scope. Thus all the packages exported by the application are available for thread context class loading by libraries. There is a minor restriction implied by this approach: no two bundles in a scoped application may export the same package, which would be pretty unusual.

The following example should help make things clear:

An Example Scoped Application
Here a scoped application contains two bundles A and B. A calls B which calls out to a library bundle L. If L uses the thread context class loader to load application types, it will have access to all the packages exported by bundles A and B. The synthetic context bundle, which imports bundles A and B, provides the thread context class loader for the scoped application.

This simple solution enables a variety of existing open source libraries, once converted to OSGi bundles, to function correctly in their use of thread context class loading.
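The library's side of this arrangement might look like the following sketch (illustrative, not Virgo API): it resolves application class names against the thread context class loader, which within a Virgo scope is the synthetic context bundle's loader and can therefore see every package exported by the scoped application.

```java
public class LibraryStyleLoading {
    // How a persistence provider or similar library typically loads
    // application types: via the thread context class loader, not its own.
    static Class<?> loadApplicationType(String name) throws ClassNotFoundException {
        ClassLoader tccl = Thread.currentThread().getContextClassLoader();
        return Class.forName(name, true, tccl);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(loadApplicationType("java.util.HashMap").getName());
    }
}
```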

Thursday, February 10, 2011

Stumbling towards a better design

Some programs have a clear design and coding new features is quick and easy. Other programs are a patchwork quilt of barely comprehensible fragments, bug fixes, and glue. If you have to code new features for such programs, you're often better off rewriting them.

However, there is a middle ground that I suspect is pretty common in these days of clean code and automated test suites: you have a good program with a clear design, but as you start to implement a new feature, you realise it's a force fit and you're not sure why. What to do?

I've recently been implementing a feature in Eclipse Virgo which raised this question and, I hope, sheds some light on how to proceed. Let's take a look.


The New Feature
I've been changing the way the Virgo kernel isolates itself from applications. Previously, Equinox supported nested OSGi frameworks and it was easy to isolate the kernel from applications by putting the applications in a nested framework (a "user region") and sharing selected packages and services with the kernel. However, the nested framework support is being withdrawn in favour of an OSGi standard set of framework hooks. These hooks let you control, or at least limit, the visibility of bundles, packages, and services -- all in a single framework.

So I set about re-basing Virgo on the framework hooks. The future looked good: eventually the hooks could be used to implement multiple user regions and even to rework the way application scoping is implemented in Virgo.


An Initial Implementation
One bundle in the Virgo kernel is responsible for region support, so I set about reworking it to use the framework hooks. After a couple of weeks the kernel and all its tests were running ok. However, the vision of using the hooks to implement multiple user regions and redo application scoping had receded into the distance given the rather basic way I had written the framework hooks. I had the option of ignoring this and calling "YAGNI!" (You Ain't Gonna Need It!). But I was certain that once I merged my branch into master, the necessary generalisation would drop down the list of priorities. Also, if I ever did prioritise the generalisation work, I would have forgotten much of what was then buzzing around my head.


Stepping Back
So the first step was to come up with a suitable abstract model. I had some ideas when we were discussing nested frameworks in the OSGi Alliance a couple of years ago: to partition the framework into groups of bundles and then to connect these groups together with one-way connections which would allow certain packages and services to be visible from one group to another.

Using Virgo terminology, I set about defining how I could partition the framework into regions and then connect the regions together using package, service, and bundle filters. At first it was tempting to avoid cycles in the graph, but it soon became clear that cycles are harmless and indeed necessary for modelling Virgo's existing kernel and user region, which need to be connected to each other with appropriate filters.
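The model described above can be sketched as a toy in plain Java (this is not Virgo's API; all names are illustrative): regions hold packages, and a directed connection from one region to another carries a filter naming what the source region may see in the target. Nothing prevents a cycle between two regions.

```java
import java.util.*;

public class RegionGraphSketch {
    static class Region {
        final String name;
        final Set<String> packages = new HashSet<>();           // packages exported inside this region
        final Map<Region, Set<String>> edges = new HashMap<>(); // connection -> allowed packages
        Region(String name) { this.name = name; }
        void connect(Region target, String... allowedPackages) {
            edges.put(target, new HashSet<>(Arrays.asList(allowedPackages)));
        }
        // Visibility over direct connections only, for simplicity.
        boolean canSee(String pkg) {
            if (packages.contains(pkg)) return true;
            for (Map.Entry<Region, Set<String>> e : edges.entrySet())
                if (e.getValue().contains(pkg) && e.getKey().packages.contains(pkg))
                    return true;
            return false;
        }
    }

    public static void main(String[] args) {
        Region kernel = new Region("kernel");
        Region user = new Region("user");
        kernel.packages.add("org.osgi.framework");
        kernel.packages.add("org.example.kernel.internal");
        user.packages.add("com.example.app");
        // A harmless cycle: each region exposes a filtered view to the other.
        user.connect(kernel, "org.osgi.framework");
        kernel.connect(user, "com.example.app");
        System.out.println(user.canSee("org.osgi.framework"));        // true: passes the filter
        System.out.println(user.canSee("org.example.kernel.internal")); // false: filtered out
    }
}
```

The filtered connections are what keep kernel internals hidden while still sharing selected packages and services.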


A Clean Abstraction
Soon I had a reasonably good idea of the kind of graph with filters that was necessary, so it was tempting to get coding and then refactor the thing into a reasonable shape. But I had very little idea of how the filtering of bundles and packages would interact. In the past I've found that refactoring from such a starting point can waste a lot of time, especially when tests have been written and need reworking. Code has inertia to being changed, so it's often better to defer coding until I get a better understanding.

To get a clean abstraction and a clear understanding, while avoiding "analysis paralysis", I wrote a formal specification of these connected regions. This is essentially a mathematical model of the state of the graph and the operations on it. This kind of model enables properties of the system to be discovered before it is implemented in code. My friend and colleague, Steve Powell, was keen to review the spec and suggested several simplifications, and before long we had a formal spec with some rather nice algebraic properties for filtering and combining regions.

To give you a feel for how these properties look, take this example which says that "combining" two regions (used when describing the combined appearance of two regions) and then filtering is equivalent to filtering the two regions first and then combining the result:
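In symbols, the property reads roughly as follows (this is a reconstruction from the prose description above; the original figure is not reproduced and the operator names are illustrative):

```latex
% Filtering distributes over region combination:
% "combine then filter" equals "filter then combine".
\mathit{filter}_{f}(r_1 \oplus r_2) \;=\; \mathit{filter}_{f}(r_1) \oplus \mathit{filter}_{f}(r_2)
```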


Being a visual thinker and to make the formal spec more useful to non-mathematicians, I also drew plenty of pictures along the way. Here's an example graph of regions:


A New Implementation
I defined a RegionDigraph ("digraph" is short for "directed graph") interface, implemented it, and defined a suite of unit tests to give good code coverage. I then implemented a fresh collection of framework hooks in terms of the region digraph, ripped out the old framework hooks and the code supporting what in retrospect was a poorly formed notion of region membership, and replaced them with the new framework hooks underpinned by the region digraph.


I Really Did Need It (IRDNI?)
It took a while to get all the kernel integration tests running again, mainly because the user region needs to be configured so that packages from the system bundle (which belongs in the kernel region) are imported along with some new services such as the region digraph service.

As problems occurred, I could step back and think in terms of the underlying graph. By writing appropriate toString methods on Region and RegionDigraph implementation classes, the model became easier to visualise in the debugger. This gives me hope that if and when other issues arise, I will have a better chance of debugging them because I can understand the underlying model.

A couple of significant issues turned up along the way, both related to the use of "side states" when Virgo deploys applications.

The first is the need to temporarily add bundle descriptions to the user region.

The second is the need to respect the region digraph when diagnosing resolver errors. This is relatively straightforward when deploying and diagnosing failures. It is less straightforward when dumping resolution failure states for offline analysis: the region digraph also needs to be dumped so it can also be used in the offline analysis.

These issues would have been much harder to address in the initial framework hooks implementation. The first issue would have involved some fairly arbitrary code to record and remove bundle descriptions from the user region. The second would have been much trickier as there was a poorly defined and overly static notion of region membership which wouldn't have lent itself to including in a state dump without looking like a gross hack. But with the region digraph it was easy to create a temporary "coregion" to contain the temporary bundle descriptions and it should be straightforward to capture the digraph alongside the state dump.

Ok, so I'm convinced that the region digraph is pulling its weight and isn't a bunch of YAGNI. But someone challenged me the other day by asking "Why do the framework hooks have to be so complex?".

Unnecessary Complexity?
Well, firstly the region digraph ensures consistent behaviour across the five framework hooks (bundle find, bundle event, service find, service event, and resolver hooks), especially regarding filtering behaviour, treatment of the system bundle, and transitive dependencies (i.e. across more than one region connection). This consistency should lead to fewer bugs, more consistent documentation, and ease of understanding for users.

Secondly, the region digraph is much more flexible than hooks based on a static notion of region membership: bundles may be added to the kernel after the user region has been created; application scoping should be relatively straightforward to rework in terms of regions, giving scoping and regions consistent semantics (fewer bugs, better documentation, etc.); and multiple user regions should be relatively tractable to implement.

Thirdly, the region digraph should be an excellent basis for implementing the notion of a multi-bundle application. In the OSGi Alliance, we are currently discussing how to standardise the multi-bundle application constructs in Virgo, Apache Aries, the Paremus Service Fabric, and elsewhere. Indeed I regard it as a proof of concept that the framework hooks can be used to implement certain basic kinds of multi-bundle application. As a nice spin-off, the development of the region digraph has resulted in several Equinox bugs being fixed and some clarifications being made to the framework hooks specification.


Next Steps
I am writing this while the region digraph is "rippling" through the Virgo repositories on its way into the 3.0 line. But this item is starting to have a broader impact. Last week I gave a presentation on the region digraph to the OSGi Alliance's Enterprise Expert Group. There was a lot of interest and subsequently there has even been discussion of whether the feature should be implemented in Equinox so that it can be reused by other projects outside Virgo.

Postscript (30 March 2011)
The region digraph is working out well in Virgo. We had to rework the function underlying the admin console because there is no longer a "surrogate" bundle representing the kernel packages and services in the user region. To better represent the connections from the user region to the kernel, the runtime artefact model inside Virgo needs to be upgraded to understand regions directly. This is work in progress in the 3.0 line.

Meanwhile, Tom Watson, an Equinox committer, is working with me to move the region digraph code to Equinox. The rationale is to ensure that multiple users of the framework hooks can co-exist (by using the region digraph API instead of directly using the framework hooks).

Tom contributed several significant changes to the digraph code in Virgo including persistence support. When Virgo  dumps a resolution failure state, it also dumps the region digraph. The dumped digraph is read back in later and used to provide a resolution hook for analysing the dumped state, which ensures consistency between the live resolver and the dumped state analysis.

Thursday, November 25, 2010

How to develop with the Virgo Kernel in Eclipse

A number of groups are discovering that the Virgo Kernel is a really useful OSGi runtime. For instance the Rich Ajax Platform project have documented how to run RAP on the kernel.

Others have steered clear of the kernel because the Eclipse-based Virgo tooling does not support the kernel directly (see bug 331101). However, it's really easy to get the tooling working with the kernel. Here's how...

First, set up the Virgo tooling in Eclipse as described in the Tooling chapter of the Virgo Programmer Guide.

However, when you create a new Virgo Web Server server runtime environment and point at a Virgo kernel installation, you'll see the following error:

.version file in lib directory is missing key 'virgo.server.version'. Make
sure to point to a Virgo Server installation.
Edit lib/.version in the kernel installation to contain the key 'virgo.server.version' instead of 'virgo.kernel.version'. Then the above error goes away and you can continue to set up the runtime environment and server as per the instructions in the Programmer Guide.
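For example, the edit amounts to renaming the key while keeping the value (the version string shown here is illustrative and will vary with your kernel build):

```
# lib/.version before (kernel installation):
virgo.kernel.version=2.1.0.RELEASE
# after editing for the tooling:
virgo.server.version=2.1.0.RELEASE
```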

You can then use the Virgo tooling to do all the usual things: start and stop the kernel, create and deploy bundles, PARs and plans, examine bundles, and explore package and service wirings.

Thursday, November 11, 2010

OSGi Adoption Experience

The Mule team have blogged about their decision not to adopt OSGi. The decision seems reasonable: OSGi isn't a panacea and I guess the benefits didn't outweigh the costs in their case.

One of their number, Ross Mason, correctly identifies some of the costs of adopting OSGi. He would like OSGi to be almost completely invisible to the developer, leaving him or her free to define an application in terms of modules without dealing directly with manifests, bundles, and build systems. The somewhat messy details of developing and building modular applications with OSGi are partly what led SpringSource to donate the dm Server project to Eclipse as Virgo. The aim is to grow the user community and smooth out the wrinkles in the developer experience.

Adoption is going pretty well. Virgo recently shipped its first release. SAP are adopting Virgo in a strategic way. VMware products in development are using dm Server/Virgo to obtain the benefits of modular application code. Others are using dm Server/Virgo to host substantial applications in sectors including investment banking, networking, healthcare, online gaming, and academic research.

As we prepare to donate the Virgo tooling to Eclipse and collaborate in the OSGi Enterprise Tools project inside Eclipse with the likes of SAP, EclipseSource, and Eteration, I'm optimistic about improving the developer experience.

Dissenting voices such as those of the Mule team are a good reality check. Efforts, such as Mule, to try to find intermediate modularity solutions with some of the benefits of OSGi but with smaller costs should also help by providing data points for comparison with full blown OSGi adoption.

I'm looking forward to more experience reports from application development groups that have adopted OSGi or technologies like Mule so others can make a more informed decision about the technology to adopt. Certainly OSGi users are starting to publish their experience. Perhaps users of Mule will soon do the same.

Monday, November 01, 2010

Eclipse Virgo 2.1.0 Release

Jump to Hyperspace, courtesy of Steve Jurvetson
Eclipse Virgo has shipped its first release after several months of work to contribute it to Eclipse.

The majority of the effort was to make the contribution acceptable from an Eclipse legal perspective. There were some close shaves, like when a file (mime.types) in the Spring Framework turned out to have a restrictive license. The file would have been virtually impossible to recreate from scratch. I am very grateful to the original author, Ian Graham, for solving this problem by making the content available under an Apache license. I would also like to thank Barb Cochrane and Sharon Corbett of the Eclipse legal team who cheerfully and efficiently handled over 200 "contribution questionnaires" covering the code donated to Virgo and its dependencies.

Apart from that, we have repackaged the code in the org.eclipse namespace, speeded up the startup considerably, upgraded several dependencies, fixed a few bugs, and made several enhancements.

The response from the community was particularly encouraging with thousands of downloads of the milestones and numerous forum posts and mailing list discussions. We are particularly grateful to the code contributors, listed in the release notes.

The main objective of contributing Virgo was to grow the community and thereby make the technology more easily consumed by application developers. The project seems to be meeting the community objective and we are now preparing to donate the Virgo tooling to Eclipse which will enable the community to help us make the developer experience smoother, especially for newcomers to modularity and OSGi.

Finally, I'd like to thank the other Virgo committers, Chris Frost and Steve Powell, for their persistence and thoroughness. I'm glad we can now move on to more interesting items for the next release. Thanks too to everyone else in SpringSource who supported the Virgo project in various ways. We couldn't have done it without you.

The future looks promising, starting with a Virgo face to face meeting at the end of this month. It's not too late to express your interest (on virgo-dev) if you would like to attend.

Tuesday, October 12, 2010

Eclipse Virgo: the ideal OSGi server runtime

Since the Eclipse Virgo web site is seeing more new visitors than ever, it seemed like a good time to summarise the benefits of Virgo.

Essentially Virgo is the culmination of 3 years of work in developing an application server based entirely on OSGi and exposing OSGi for use by applications. Virgo started inside SpringSource as the dm Server project which was licensed under the GNU Public License and shipped two major versions. About a year ago we started donating the codebase to Eclipse as the Virgo project licensed under the much more liberal Eclipse Public License. The transfer of the runtime code to Eclipse is now complete, we have satisfied the rather stringent IP checks required of all Eclipse projects, and Virgo is nearing its first formal release by the end of October.

Virgo is an ideal OSGi server runtime, possibly the ideal OSGi server runtime, from several perspectives. Firstly, it was designed from the ground up as OSGi bundles so it's extremely modular and extensible. For example, the core of the runtime is the Virgo kernel and the Virgo web server is constructed by configuring a web layer, an admin console, and some other utilities on top of the kernel. The kernel protects itself from interference from such additions, and applications, by running them in a nested framework. This nested framework approach also enables applications to use new versions of the Spring framework (regardless of the fact the kernel depends on Spring 3.0.0). All the major Java EE application servers are now built on top of OSGi, but only Virgo was designed for OSGi and didn't need OSGi to be retrofitted.

Secondly, Virgo is an ideal OSGi server runtime because it exposes OSGi in a usable way to applications. In addition to all the standard features of OSGi — encapsulated modules or bundles, a powerful service registry, versioning, class space isolation, sharing of dependencies, dynamic refreshing and updating, and much more — Virgo provides a multi-bundle application model to simplify the deployment and management of non-trivial applications. Virgo also provides a repository which can store dependencies such as OSGi bundles which are "faulted in" when needed. This results in cleaner definitions of applications and a small footprint compared to traditional Java EE servers which pre-load many features just in case an application needs them. The major Java EE application servers are beginning to follow suit in exposing an application model, so SpringSource is working with other vendors in the OSGi Alliance to produce a standard multi-bundle application construct.

Thirdly, Virgo is an ideal OSGi server runtime because it enables existing Java libraries to run successfully in an OSGi environment. The Virgo kernel supports thread context class loading, load time weaving, classpath scanning, and a number of other features which are commonly used by persistence providers and other common Java utilities. Essentially, we observed the commonly-occurring problems when people attempted to migrate to OSGi and implemented general solutions to those problems early on.

Fourthly, Virgo is an ideal OSGi server runtime because it integrates OSGi with Spring and Tomcat. Virgo uses Spring DM to wire application contexts to the OSGi service registry. Spring beans can be published as OSGi services and can consume OSGi services, both with minimal effort. The embedded form of Tomcat is used as a servlet engine in Virgo's web support and is configured and managed just like standard Tomcat.

Fifthly, Virgo is an ideal OSGi server runtime because of its excellent diagnostics:

  • An admin console lets you examine the state of all artifacts deployed in Virgo as well as explore the state of bundles in the OSGi framework.
  • Virgo provides multiple types of diagnostics: event logging aimed at administrators, diagnostics logging aimed at developers, as well as various types of diagnostic dumps.
  • Virgo builds on Logback to support highly configurable and efficient logging.
  • When an application is deployed, Virgo first resolves the application in a "side state" and, if resolution fails, no changes are committed to the OSGi framework and the side state is dumped to disk so that it can be analysed using the OSGi state inspector in the admin console.
  • If a resolution failure occurs, Virgo also analyses the resolver error, including the root causes of any uses constraint violation, and extracts a meaningful message to include in an exception which is then thrown.
  • Virgo adds advanced diagnostics to Spring DM to track the starting of bundles and their application contexts and issue error messages on failure and warnings when dependencies are delayed.
  • Virgo automatically detects deadlocks and generates a thread stack dump.
Finally, Virgo is an ideal OSGi server runtime because it has advanced tooling support in the SpringSource Tool Suite or in standard Eclipse via an update site. This enables a Virgo server to run under the control of the tooling, applications to be deployed, debugged, and updated by the tooling, and package and service dependencies between bundles to be analysed. The Virgo tooling will also be donated to Eclipse in due course. Oh, I almost forgot to mention the obvious: Virgo is an open source project with a liberal license and active participation from multiple vendors, which positions it ideally for the future.

Footnote: the bulk of this article was originally posted to springsource.org. It is reproduced here for the benefit of Planet Eclipse and others.

Thursday, September 30, 2010

I Love OSGi, Now Please Change It!

I'm sitting in the OSGi DevCon where a panel are discussing what needs to change in OSGi. Several points are coming up that are relevant to OSGi standards and also to what we have been doing in Virgo.

One excellent question was posed by a panel member, IBM Distinguished Engineer Ian Robinson, who asked whether OSGi needs to distinguish better between APIs intended for applications and APIs intended for middleware implementers and how this squares with the OSGi tendency for all bundles to be created equal.

I believe the answer here is to use composite bundles or nested frameworks to provide a unit of modularity larger than an individual bundle. I don't think OSGi needs to prescribe which APIs fall into each category, but it should provide mechanisms for hiding APIs in a given environment which are not intended for application use.

Virgo uses an early prototype of nested frameworks to achieve this kind of isolation between the Virgo kernel and the user region, which is a nested framework in which applications run. Users are free to deploy bundles in the user region and can even install additional bundles into the kernel and configure the packages and services shared between the kernel and the user region.

Fortunately, the notion of composite bundles and nested frameworks is being actively developed in the OSGi standards work and so their mechanisms should become more broadly available in a standard form in due course.

There is also OSGi standards work going on to expose these mechanisms in a usable way to applications. Virgo provides plans and plan archives as a way of grouping together artefacts for deployment and management. Apache Aries has also been implementing some similar grouping mechanisms with their application model. Paremus and others have similar constructs. So the standardisation activity has plenty of implementation experience and exposure to users to inform it. Virgo has also introduced a scoping mechanism as a way of isolating groups of application bundles from other groups of application bundles. This will also feed into the standards work.
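To make the grouping concrete, a Virgo plan is a small XML file listing the artefacts to be deployed and managed as a unit. A sketch along these lines (artefact names are invented, and attribute and namespace details may vary between Virgo versions):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical plan grouping a service bundle and a web bundle.
     "scoped" isolates the group's packages and services from other
     applications; "atomic" ties the members' lifecycles together. -->
<plan name="greeting.plan" version="1.0.0" scoped="true" atomic="true"
        xmlns="http://www.eclipse.org/virgo/schema/plan">
    <artifact type="bundle" name="org.example.greeting.service" version="[1.0, 2.0)"/>
    <artifact type="bundle" name="org.example.greeting.web" version="[1.0, 2.0)"/>
</plan>
```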

Another great question from another panel member, the OSGi consultant Neil Bartlett, was how much OSGi should accommodate and support legacy code and how much it should require legacy code to be restructured to be more modular.

The direction of the Enterprise Expert Group in recent years has been to accommodate porting Java EE applications to an OSGi environment and then modularising them.

Virgo has had support for legacy code as one of the main design points from its inception. The Virgo kernel implements solutions to several knotty problems which crop up regularly when using non-OSGi libraries in an OSGi environment. These are assumptions about thread context class loading, load time weaving, class path scanning, and implicit package use.

So using implementations of the OSGi Enterprise specifications is one part of the answer, as is providing the kind of support for legacy libraries available in Virgo.

Sunday, September 26, 2010

Virgo and Ocean Observatories Initiative

Virgo seems to be finding its way into the scientific community. A research hospital and a networking research group already use Virgo heavily. It seems that an oceanography group are also considering Virgo for their "ioncore-java" runtime.

They are at the stage of comparing various application servers. The front-runners, at least in terms of their feature comparison table, are Virgo and WSO2. Fortunately, I believe the architecture and design of these products are quite different, so this should make the decision easier.

Virgo was built from the ground up as OSGi bundles and offers first class support for OSGi applications. The Virgo kernel is independent of any particular communications protocol. Tomcat is added as part of the Virgo web server which sits on top of the kernel. The whole of Virgo runs inside an OSGi container. Web services can be supported on Virgo, but it's not something Virgo has majored on as this is just one of the features available in the Spring portfolio running on Virgo.

WSO2 on the other hand is focussed on - and the clue's in the name here - web services. It is constructed as an interoperating network of web services components. WSO2 does support OSGi, but it hosts an OSGi container inside Tomcat.

Let's see what the Ocean Observatories Initiative regard as the key requirements for their component model. If it's OSGi components with primarily local communication (i.e. Java method call), then I reckon Virgo would be ideal. If on the other hand they want a primarily distributed component model (and it's a big if) with web services as the communication protocol, then WSO2 may be a good choice.

One last word. I felt a bit sorry for Glassfish as it didn't get a tick in the J2EE box even though it's the J2EE reference implementation. Also Tomcat is the servlet engine used by Virgo, so it deserves just as much of a J2EE tick in the box as Virgo. Similarly JBoss should get the J2EE tick.

Monday, July 12, 2010

Grails on Virgo

Wolfgang Schell recently released an OSGi plugin for Grails. This runs on Virgo but apparently uses Spring DM's web extender rather than Virgo's web support. This makes me wonder if it would be better run on the Virgo kernel to avoid loading Virgo's web support. Perhaps he'll go into more detail when he has experimented more with Virgo.

Sunday, July 11, 2010

RAP on Virgo

Owen Ou blogged about deploying RAP (Eclipse Rich Ajax Platform) on an OSGi based server like Virgo. He says it's extremely easy, which is good to hear, but I'd love to see a bit more detail. For instance, assuming he's actually doing it on Virgo, is the RAP infrastructure launched as a plan in the initial list of artifacts for the user region? How are RAP applications then deployed? Fascinating stuff...

[The link to Owen's blog broke. I guess he deleted the blog. Anyway, Florian Waibel has also blogged about RAP on Virgo.]

Tuesday, May 11, 2010

Virgo kernel checked in and ready for use

The Virgo kernel was checked in to Eclipse git earlier today and is ready for you to take it for a spin.



No, this isn't the Virgo kernel - it's a 5 Mb hard disk from 1956 which weighed over a ton. The Virgo kernel zip file would have needed a couple of tons of hard disk for storage. Thank goodness times have moved on.

Wednesday, May 05, 2010

IntelliJ IDEA Support for dm Server

IntelliJ just announced support for dm Server. Although I haven't tried using the support, it appears to be at a reasonable level of function and only a little behind the SpringSource Tool Suite (as far as dm Server support is concerned), with plans to fill the gap. IntelliJ haven't mentioned Virgo yet, but I'm hopeful that the Eclipse (RT) branding won't put them off.

This, in my opinion, is a sign of a healthy runtime: multiple vendors competing on tooling. I'd like to see the same on management tooling, which could be implemented relatively easily as a layer on top of dm Server's JMX interface.

Friday, April 23, 2010

First Virgo components available for development

See this if you are interested in doing some Virgo development.

Special thanks go to the Eclipse IP team, especially Barb and Sharon, for making this possible at this stage.

Also thanks to Chris Frost for his help in raising CQs.

Thursday, April 01, 2010

Virgo contribution underway

I just finished entering the main CQs (Contribution Questionnaires) for Virgo. The corresponding tracking bugs, which are visible to everyone, are at the bottom of the list here.

There are now two gating factors before we can commit the code: the workload of the small but committed Eclipse legal team and my team's ability to field any issues they raise...



Once we've gotten the code committed, it will be possible for others to clone the git repositories, build, test, develop improvements, and submit patches. Essentially we'll be in business as far as contributors and build-it-yourself users are concerned.

Later, when we've also gotten the 3rd party dependencies through the legal process, we'll be able to start doing releases out of eclipse.org.

Friday, March 26, 2010

EclipseCon 2010 Virgo Presentation

Reflections on EclipseCon

EclipseCon has been a great conference - on a par with the Spring conferences for enthusiasm and quality of talks etc. There was an enthusiastic welcome to the Virgo project which made my attendance really worthwhile. Several people discussed possible contributions to the Virgo project and there were a number of useful chats with potential consumers of Virgo as well as committers on other EclipseRT components who are keen to see Virgo fully integrate with the rest of RT.

For me, the highlight was the Virgo BoF with a room full of people really interested in learning more about Virgo. I was able to give a bit more detail about the internals of Virgo, clarify the separation of the various components, and start to explain the concepts underpinning the kernel.

I'd like to congratulate the organisers for putting on such a great conference and I look forward with anticipation to future EclipseCons.

Saturday, March 20, 2010

Eclipse Virgo home page

Eclipse Virgo. What more can I say, other than "Thanks Chris"?

Wednesday, March 17, 2010

EclipseCon Sessions

EclipseCon starts next week and there are plenty of sessions relevant to Virgo.

I'm doing the main Virgo talk and there's a Virgo launch BoF session (and a proposed BoF on Virgo tooling). I'm on panels on Gemini and the future of application servers and co-leading a BoF on application models for OSGi. I'm also giving a brief update on Virgo at the Eclipse Membership Meeting.

Then there's the main Gemini talk, an EclipseRT BoF, and other sessions on OSGi, Apache Aries, Maven and OSGi, and JPA and OSGi.

Wednesday, February 03, 2010

"Strong" optionality

Richard Hall tells me Felix interprets optional imports as "strong", to use my earlier term. I believe there is enough latitude in the OSGi spec to permit this. Equinox takes the opposite interpretation and can discard optional imports of available packages in order to overcome uses constraints which would otherwise prevent resolution.
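For concreteness, an optional import is declared in a bundle's manifest with the resolution directive (the package name here is hypothetical):

```text
Import-Package: org.example.monitoring; resolution:=optional; version="[1.0,2.0)"
```

Under the spec, the bundle still resolves if no such package is available; the question at issue is whether the resolver may also discard the import when the package is available, in order to escape a uses conflict.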

Regardless of that, I think that strong optionality is more likely to be what the user wants.

Without strong optionality, if the user provides a bundle which can satisfy some optional imports, they are not likely to be pleased if the resolver discards corresponding optional imports as the functions provided by the bundle will then be unavailable. Better to fail fast with diagnostics that allow the user to sort out the uses constraints.

Even worse, without strong optionality resolution may fail after a protracted attempt to discard combinations of optional imports in order to get a valid wiring. All the optional imports could be discarded before embarking upon a protracted search, to ensure that the search is not in vain. Perhaps Equinox already does this, but I doubt it, as it could be criticised as optimising for the failure case.

OSGi resolution is NP-Complete. So what?

At last we have a proof, thanks to Robert Dunne, that the OSGi R4 resolution problem is NP-Complete. He showed that it is possible to represent a solution to the NP-Complete boolean satisfiability problem 3-SAT as a wiring of a carefully constructed set of OSGi bundles. After the idea arose of using a SAT solver to construct an OSGi resolver, some of us have been discussing the possibility of a SAT-based proof. The hard step was how to represent a boolean OR in terms of resolution, which Robert has now provided.

NP-Completeness tallies with the experience of those who have written an OSGi R4 resolver: it takes about a fortnight of intensive hacking, combined with sleepless nights, to crack the problem followed by a much longer period of bug fixing and optimisation. Now we know why: NP-Complete problems are hard to solve efficiently.

I'm partly to blame as I led the spec work on OSGi RFC 79 which added considerable function, and with it complexity, to the resolution algorithm. I presented some of the background at the 2004 OSGi Congress. The RFC 79 spec was one of the more interesting that the OSGi Core Platform Expert Group has worked on. There was a tension between addressing the basic use cases and creating something that was too hard to implement.

Fortunately, we developed a prototype resolver in the Equinox incubator. This provided useful feedback to the spec work, particularly when features were dropped from the spec which made the implementation more tractable. The team which created the prototype included Simon Burns, Steve Poole, and Tom Watson. Simon wrote the initial R4 resolver. Steve created a rigorous model-based test based on a slow, but functionally correct, resolver of his own. Tom provided much advice on the R3 resolver and eventually integrated the new resolver into Equinox and has been improving it ever since.

So what features emerged from this process that made the resolver NP-Complete? Well, crucial to the proof are the uses constraint and its evil twin, optional imports. Steve Powell observes that the proof does not depend on the transitivity of the uses constraint. (Other R4 features that are arguably much more evil, require-bundle and fragments, also did not contribute to the resolver's intrinsic complexity.)

This suggests some practical ways round the problem. Examples occasionally crop up when the time to resolve a set of bundles becomes unacceptable. How can we work around such issues if we are willing to put in the effort?

Firstly, uses constraints can be avoided by preventing types from one package leaking out through the interface of another package. This is an application of the Law of Demeter.
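For example, if a type from one package appears in the public signatures of another package's API, the exporter has to declare a uses constraint, which the resolver must then honour transitively across the class space. With hypothetical package names, the manifest looks something like this:

```text
Export-Package: org.example.api; version="1.0.0"; uses:="org.example.util"
Import-Package: org.example.util; version="[1.0,2.0)"
```

Keeping org.example.util types out of org.example.api's public signatures (for instance by wrapping or converting them at the boundary) allows the uses directive to be dropped, removing that constraint from the resolver's search.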

Secondly, in some cases it may be possible to remove, or at least cut down, optional imports by using mandatory imports instead and splitting the function which has those dependencies across bundles so that the user's dependencies determine which bundles are needed.

These approaches are feasible only for new code. Bundles generated from popular Java libraries tend to exhibit both uses constraints and optional imports.

Another possibility we've been considering is a "strong" form of optionality in which the optional import cannot be discarded if it is locally satisfiable, i.e. before uses constraints are applied. Certainly the current definition of optional import is critical to Robert's proof. The challenge is to convince ourselves that R4 resolution with strong optionality in place of the current definition is no longer NP-Complete or, conversely, to prove the opposite.

I'm undecided. Strong optionality drastically reduces the search space in some cases. On the other hand, I feel there may be a proof based on 3-SAT in which multiple bundles import, but do not export, packages.

Wednesday, January 27, 2010

OSGi standards and adoption

The TSS thread on Virgo and whether or not OSGi is ready for mainstream enterprise application development contains some misconceptions.

Firstly, that SpringSource is somehow not involved in the OSGi standards process.

That couldn't be further from the truth. Recently we led the standardisation of Spring DM into the Blueprint service specification and reference implementation, while IBM provided the compliance tests. We are developing the reference implementation of the OSGi web container, while IBM wrote the specification and is providing the compliance tests. Then, as we look to the future, we are leading two new specifications targeted at R4.3 and are involved in others.

Secondly, it is easy to get the impression that other vendors, such as IBM and Oracle, are about to solve the adoption problem.

I certainly hope they do, but, at least in the case of IBM's WebSphere Application Server OSGi alpha, their programming model seems very similar to that of dm Server, and their development and build tools, where the real adoption issues seem to lie, do not appear particularly advanced.

Perhaps IBM's large customer base and marketing machine will encourage developers to jump the necessary hurdles. But the tricky thing about a product like WAS is that, after the alpha code has been rolled into the main product, it will be very hard to measure real usage. The only real evidence will be customer testimonials and APAR (i.e. external defect) rates against the OSGi features, which IBM typically do not publish.

Let's see how Virgo gets on. At least the adoption should be easier to measure as users will almost certainly be using it for its OSGi features.

Friday, January 15, 2010

Building OSGi applications

The recent SD Times article on Virgo paints an overly bleak picture of the problems of building an OSGi application.

Build certainly complicates adopting OSGi, but many users are willing to overcome that problem in order to get the benefits of modularity and the service registry.

Also various tools are available to help. We developed the bundlor tool for automatically generating bundle manifests from the code in a bundle. Peter Kriens's bnd tool provides similar function.

People have successfully developed Maven and Ant builds for OSGi projects, and various helper plugins, for instance PAX Construct, are starting to emerge.

Also, SD Times fails to mention the Enterprise Bundle Repository which already makes available (directly or via other repository front ends) about a thousand of the most popular open source Java components packaged as OSGi bundles with carefully constructed bundle manifests.

Tuesday, January 12, 2010

Eclipse Virgo project

The news just broke. SpringSource are donating the dm Server code base to eclipse.org as part of the EclipseRT umbrella project. This should be just the server runtime that EclipseRT has been looking for. The Virgo project proposal is available and more details are on the SpringSource blog.

Tuesday, December 08, 2009

WebSphere OSGi Application Model

As noted previously, WebSphere Application Server's alpha support for OSGi applications is similar to that of SpringSource dm Server. Now IBM's open source server, Geronimo, has taken its first steps to join the party.

Apparently Geronimo 3.0 is now based on OSGi, like all other modern Java app servers. (I wonder what vestigial traces of its GBean/XBean heritage remain?)

Geronimo's OSGi application model is provided by the Aries incubator, so I wonder if that is also the base of the WAS alpha, since Geronimo is essentially WAS CE.

Maybe the answer is that the closed source WAS is also based on Aries, but I didn't dare accept the license agreement on the download to find out.

Monday, December 07, 2009

SpringSource dm Server catching on

The recently announced alpha driver of IBM WebSphere Application Server v7 support for OSGi applications is covered in a brief dzone interview with Kirk Knoernschild of the Burton Group.

As Kirk notes in his blog, SpringSource dm Server has already brought OSGi to the enterprise. dm Server v1.0 shipped in September 2008 and v2.0 will enter RC1 in the next few weeks.

The WAS alpha has some features in common with dm Server, notably repositories and applications along similar lines to plans and PARs. Imitation is the sincerest form of flattery or, putting it another way, IBM has validated dm Server's approach.

I'm still waiting for technical details of WebLogic DM Server, but since it is microkernel based (unlike WAS) and with a name like that, it's likely to validate dm Server further.

Meanwhile GlassFish and JBoss support for OSGi is coming along nicely, so it'll be fascinating to see what kind of OSGi application models these projects end up providing.

Tuesday, June 16, 2009

Jigsaw/JSR 294 on JavaPosse

I listened to most of the JavaPosse interview with Mark Reinhold and Alex Buckley about Project Jigsaw and JSR 294. It's nice to see some technical details emerging and I can start to see why OpenJDK would prefer a different design for modularity to that of Apache Harmony.

I was moderately surprised to find that the design point is to load multiple Jigsaw modules with "local" dependencies into the same class loader. I wonder how many class loaders the JRE will end up having? There was even talk of putting all the bootstrap classes in a single class loader. So maybe the answer is that most of the JRE code will be loaded in the bootstrap class loader and subdivided using JSR 294 constructs. Time will no doubt tell.

For the record, I hope I've never claimed that Apache Harmony has shipped runtime modularity. The Harmony class libraries are divided into bundles with OSGi metadata. A prototype that enforced those class loader boundaries in the JVM was built, but never shipped. It had a rather different design for bootstrap class loading to that being proposed for OpenJDK, but I suppose Harmony had the benefit of having modularity in mind from the start.

Saturday, June 13, 2009

OSGi, Scala, maven, and git

I've noticed a few people are integrating OSGi and Scala. In particular Heiko Seeberger is building OSGi bundles for the Scala language plus various utility modules and DSLs. Then he's using maven for build and git for version control.

I am, of course, a fan of OSGi. I'm rapidly warming to git. But I'm not sure Scala and maven really solve more problems than they introduce. I don't want to dwell on maven as I'm not sure other build systems are really much better, but Scala does raise lots of questions in my mind.

Many years ago, I was keen to adopt new technology and some combinations worked really well while others just seemed to multiply the overall complexity. One particular combination was the use of Z specification language embedded using literate programming in a subset of our target programming language with proof rules. It was possible to create proof trees of simple specifications refined to the target language. The main problem was that there were very few people who could acquire the skills necessary to master the combined technology.

Well, perhaps that was an extreme example and maybe it was doomed from the start, but I have a niggling doubt about the size of the Scala language.

So I'd be delighted to understand what features of OSGi and Scala work well together (and what features are simply best not used as part of this combination). In particular, are there any particular problems which are significantly simplified by combining OSGi and Scala which cannot be attacked very well using OSGi (and Java) or Scala alone?

Postscript

This document partially answers my questions. Scala makes it easy to write a DSL that simplifies handling OSGi services.

This is similar to the kind of function provided by Spring DM (or, equivalently, the OSGi Blueprint service).

The main difference is that there is still some minimal clutter in the Scala program whereas with Spring DM POJOs are configured in XML to provide and use OSGi services. I suppose the Scala approach favours sophisticated use of OSGi services, but this is not the norm in typical enterprise applications.
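For comparison, the Spring DM XML configuration referred to above looks roughly like this (bean and interface names are invented for illustration):

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:osgi="http://www.springframework.org/schema/osgi"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/osgi
           http://www.springframework.org/schema/osgi/spring-osgi.xsd">

    <!-- Publish a plain Spring bean as an OSGi service -->
    <bean id="greetingService" class="org.example.GreetingServiceImpl"/>
    <osgi:service ref="greetingService" interface="org.example.GreetingService"/>

    <!-- Consume an OSGi service and inject it like a local bean -->
    <osgi:reference id="auditService" interface="org.example.AuditService"/>
</beans>
```

The POJOs themselves contain no OSGi API calls; all the service registry interaction is declared in the XML.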

Friday, June 12, 2009

Coding and hill-walking

I've recently been re-organising the internals of the SpringSource dm Kernel so that it can be extended to provide structured install artefacts known as "plans". The current code has been extended to provide a plan equivalent to the existing PAR files, but other variants of plans, unscoped and non-atomic, cannot be implemented without the necessary data structures being put in place.

From early in the development of dm Server 1.0, the team was using several terms which summarised the behaviour of the kernel, but which were not easily identified in the code. Terms such as "scope" and "deployment pipeline". As we started thinking about 2.0, we identified "plan", "install tree", and "install environment" as basic concepts, even if the latter was pretty fluffy.

Scopes were implemented as first class types quite a while ago now and were crucial to being able to make cloning work. One of the trickiest areas was performing operations on types which could be cloned or non-cloned. Explicit scopes enabled these operations to be coded to a non-cloned interface type and re-dispatched as necessary to the appropriate cloned type.

Recently, I've been turning deployment pipeline, install tree, and install environment into first class types. I really love this kind of work. It's rather like walking through a range of hills where the higher peaks are only visible when some of the lower summits are behind you.

Sometimes, however, things get much more strenuous. It's as if the terrain is getting much steeper and there is no obvious path to follow.

This afternoon I was struggling to structure the code to handle the starting of bundles. There's an interesting coordination and state management issue because starting a bundle has a synchronous phase defined by OSGi and often an asynchronous phase defined by Spring DM or the OSGi Blueprint Service. I made some progress, but I'm still not happy with the current code, and it certainly isn't easily unit testable, which is a good measure of code quality.

At this point, the only option is to take a break and come back to the problem with a clear head. I tried consulting colleagues and that helped quite a bit, but there are hopefully some really good abstractions waiting to be discovered.

I think the root cause of the complexity of this small area is that it is fairly low level and we're lacking some really good ways of thinking about the asynchronous processes involved. Occasionally, modelling processes in CSP and exploring the properties of the model using a model checker such as FDR has helped. But more often than not, the best approach seems to be iteratively cleaning up a bunch of code until a good structure starts to emerge.

In the worst case, I'll make the code as clean as possible and add it to the list of areas where future discoveries will hopefully make things cleaner. But I'm hoping for a much better outcome.

Well, that's enough for now. I put this blog together as I expect other people have similar experiences when working in complex areas of software and it's good to learn from each other. I'd be interested in your perspective and how you approach these kinds of activities.

I'll leave you with a hill-walking illustration of my current position: standing on one peak trying to make out the next one in the mist.

Tuesday, April 28, 2009

CaseInsensitiveMap<V>

I recently wrote some utility code which we'll be using to process bundle manifests, typically to map header names to typed values. It was surprising that this code was neither available elsewhere nor completely trivial to implement.

So what was it? A map with String keys which is insensitive to the case of the keys and which preserves the case of the keys. Apache Commons has a CaseInsensitiveMap which whacks the keys to lower case on input. I was tempted to use that code, but in our environment, it's important to preserve the case of keys.

For example, the bundlor tool will eventually use this code to create bundle manifests from templates. If the user carefully spells headers in the template with mixed case, e.g. "Export-Package", they would probably be disappointed to see the headers converted to lower case in the generated manifest.

Implementing an (Apache licensed) case-preserving CaseInsensitiveMap was an interesting exercise. I saved some time by extending AbstractMap and AbstractSet (for the key and entry sets). I used an instance of ConcurrentHashMap to maintain the state. I made one internal type package visible so I could get 100% unit test coverage, and made some inner classes static to stop FindBugs complaining.
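To illustrate the core idea (this is a toy sketch, not the actual dm Server class), the trick is to key the backing ConcurrentHashMap on a wrapper which compares case-insensitively but remembers the original spelling:

```java
import java.util.Locale;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of a case-preserving, case-insensitive map:
// keys compare and hash ignoring case, but the map remembers the
// case in which each key was first supplied.
final class CaseInsensitiveMapSketch<V> {

    // Wrapper key: equality and hashing ignore case; toString preserves it.
    private static final class Key {
        private final String original;

        Key(String original) {
            this.original = original;
        }

        @Override
        public boolean equals(Object other) {
            return other instanceof Key
                && ((Key) other).original.equalsIgnoreCase(this.original);
        }

        @Override
        public int hashCode() {
            return this.original.toLowerCase(Locale.ROOT).hashCode();
        }

        @Override
        public String toString() {
            return this.original; // original case preserved
        }
    }

    private final Map<Key, V> state = new ConcurrentHashMap<>();

    public V put(String key, V value) {
        return this.state.put(new Key(key), value);
    }

    public V get(String key) {
        return this.state.get(new Key(key));
    }

    public boolean containsKey(String key) {
        return this.state.containsKey(new Key(key));
    }
}
```

Note that because ConcurrentHashMap keeps the existing key on replacement, a later put of "EXPORT-PACKAGE" updates the value but leaves the stored key spelt "Export-Package" as first supplied, which is exactly the behaviour a manifest-processing tool wants.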

Why blog about this? Simply to increase your chances of finding the class if you need it.

Thursday, April 09, 2009

SpringSource dm Server 2.0 M1

As announced in Rob's blog, we recently shipped the first milestone of dm Server v2.0. Apart from some new function, this marks an opening up of the dm Server development process, with publicly accessible subversion repositories and issue tracker.

The 2.0 roadmap is pretty exciting, and includes constructing a Reference Implementation for RFC 66 ("web container for OSGi") from the existing web components of dm Server.

I'm spending most of my time down in the dm Kernel implementing "cloning" which, roughly speaking, enables a bundle to be wired in more than one way, something that is not normally possible in OSGi. Cloning copies bundles into an application's scope. This enables the OSGi resolver to have another go at wiring the dependencies of the cloned bundles in a way which suits the application. More information on cloning is included in the above blog entries.

Application scopes can be thought of as an alternative to "nested" OSGi frameworks, which are being specified in the OSGi Alliance and implemented in Equinox. The contents of an application scope can "see" all the contents of the global scope which are not "shadowed" in the application scope. The contents of a nested framework can, by default, not see any of the content of the parent framework(s) unless the nested framework is explicitly defined to see some of the parents' contents.

Much of what we're learning about cloning into application scopes, such as how to calculate what needs cloning and how to manage cloned extenders, will also apply to cloning into nested frameworks if and when that becomes necessary.

Wednesday, April 08, 2009

Is JDK 7 Java?

I wonder if the JCP rules allow JDK 7 to be called "Java"? Sun appears to think so since the JDK 7 page is entitled "jdk7: Java SE 7". Apache Harmony meanwhile will presumably avoid calling itself "Java" until it can obtain and pass the JCK. Does this make Harmony the true guardian of Java? ;-)

Thursday, March 26, 2009

Progress on JSR 294

JSR 294 seems to be underway again. BJ Hargrave describes an EG discussion of visibility and access control which I found enlightening.

Tuesday, December 09, 2008

Project Jigsaw

Mark Reinhold recently announced Project Jigsaw for modularising the JDK. Could this be a step towards the dream solution? I can hope.

Meanwhile JSR 277 is deferred beyond Java 7. Joy.

Thursday, November 06, 2008

Refactoring and Structure101

I'm in the middle of a fairly broad refactoring of the internals of the SpringSource dm Server to introduce an interface to some of the subcomponents and ensure that the rest of the code accesses those subcomponents via the interface rather than directly.

The Structure101 tool has proved invaluable in this process. It has excellent support for analysing the dependencies between components and identifying anomalous dependencies such as those from lower layers of the code to higher layers. But it also supports transformations and visibility settings which turn out to be extremely useful for refactoring.

I created transformations to collect together packages which I want to treat as a unit from the point of view of the current refactoring. This would not be possible using standard refactoring techniques as the packages in question cannot actually be renamed, for reasons I won't go into here. The transformations were of the form:

org.foo.* -> myunit.org.foo.*
org.bar.* -> myunit.org.bar.*

The transformations apply only in Structure101's model of the system and do not affect the actual code. The net effect is that I can then analyse dependencies on the myunit package hierarchy using Structure101's default mechanisms.

Visibility settings come in useful in identifying the pieces of code I need to refactor. By modifying the Structure101 model so that myunit is a private subcomponent of the new interface component, all dependencies on myunit from outside of the interface component show up as anomalies.

I can then use these anomalies as a 'to do' list for my refactoring. Clicking on one of them shows the precise code dependency which I can refactor by extending the interface component, if necessary, and then changing the using code to depend on the interface component. Refreshing the Structure101 model takes a few seconds after which there is one less anomaly showing on the diagram.

I've really appreciated the support from the Structure101 developers who have helped me get the best out of the tool. It sure beats some combination of grep, a cross-referencing tool like OpenGrok, plus a ton of yellow stickies...

Monday, June 09, 2008

100% bloat free

Here's the image from the T-shirts that were given away at JavaOne.

Wednesday, April 30, 2008

SpringSource Application Platform ships

Today the SpringSource Application Platform shipped its first beta release. Version 1.0 is essentially a combination of Spring, OSGi and Tomcat engineered into a stand-alone runtime.

The programming model is Spring with OSGi and the option of a new packaging format: the platform archive or PAR file. A PAR file comprises multiple OSGi bundles. This brings OSGi modularity and versioning to enterprise applications.

The diagram below gives a high level view of the structure of the platform.

So what does the platform deliver over and above a standard OSGi container? Well, apart from the servlet function, integrated Spring support, the modular kernel-based runtime, and the full support for diagnostics and first failure data capture, there are quite a few features that are of interest to an OSGi audience:

  • The platform has a repository which automatically provisions dependencies as bundles are installed into OSGi. This repository stores the platform's own bundles as well as application and third party bundles.
  • In addition to the PAR mentioned above, there is a new concept of a library: a named, versioned collection of OSGi bundles installed in the repository, typically for use by applications. A library is accessed via a new manifest header, Import-Library, which imports all the packages exported by the bundles of the library.
  • Another new manifest header, Import-Bundle, imports all the packages exported by a single bundle. This avoids the need to wrap a component consisting of a single bundle in a library.
  • There is also some novel diagnostic help when a bundle cannot be resolved. The platform's resolution failure detective attempts to extract a concise summary of the problem and the most likely root causes.
  • Associated with the platform is a carefully constructed repository of commonly used bundles: the SpringSource Enterprise Bundle Repository known informally as the "bundle repository in the sky", or "BRITS". The bundles have validated OSGi manifests and the whole repository has been checked to make sure that its bundles can be used together.
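To make the two new headers concrete, a bundle manifest using them might look something like this (the library and bundle names and version ranges here are illustrative, not taken from a real repository):

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.shop
Bundle-Version: 1.0.0
Import-Library: org.example.somelibrary;version="[1.0.0,2.0.0)"
Import-Bundle: com.example.utils;version="[1.0.0,1.1.0)"
```

The effect is as if the bundle had written Import-Package entries for every package exported by the library's bundles and by com.example.utils.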
Finally, I can't resist saying a bit about the architecture of the platform. In the past, I've been involved in restructuring projects - adding in modularity after the fact. The platform was built in a modular fashion from day 1.

The whole platform consists of subsystems which are similar in concept to libraries but which are managed specially by the kernel. Each subsystem is a named, versioned collection of bundles. The kernel manages the installation and starting of subsystems as the platform initialises. The kernel manages the lifecycle of each subsystem and its bundles, even reacting to unsolicited lifecycle events of the bundles of a subsystem to keep the state of the subsystem consistent.

Subsystems are used to control the function of the platform. Some subsystems are always present, but others are optional and are configured in a profile. This mechanism allows the precise function of the platform to be configured without rebuilding. For instance, the shipped profile.config file (in the config directory) looks like this:

/*
 * SpringSource Application Platform profile manager default
 * configuration file.
 */
{
    "profile": {
        "version" : 1.0,
        "name" : "web",
        "subsystems" : ["com.springsource.platform.servlet",
                        "com.springsource.platform.web"]
    }
}
which shows the servlet container and web personality subsystems.

My colleague Rob Harrop just blogged about the platform if you want to know more. Why not register, download the binary, have a play, and tell us what you think.

Tuesday, April 15, 2008

OSGi could be great for JSR 277

Bryan Atsatt describes a plausible way forward for JSR 277. I agree with his reasoning, although OSGi is not only the de facto standard for Java modularity, it's a proper standard too.

Bryan was one of the key movers in the Expert Group when I was involved. In fact, he proposed a solution to JSR 294 ages ago which is close, if not identical, to the approach Alex Buckley is now considering.

Welcome to the blogosphere, Bryan!
