From the beginning, I wanted Nexus to be a collaborative visualization system that lets multiple clients in multiple locations see the same visualizations in real time. The issue that arises is knowing "where" in the 3D semantic web visualization the other clients (people/avatars) are and what direction they are looking in. The 3D digital world has the concept of a "camera": essentially your point of view in a particular 3D simulation. As the camera moves, your view of the model changes with it. To know where the other clients are in the simulation, each client's camera position and rotation data are converted to RDF triples and sent to the Nexus server, which rebroadcasts them to all the other clients to keep everyone synchronized. Nexus eats, breathes, and internalizes everything as RDF. HTTP polling would not work well as a transport for these triples, especially with a dozen or more clients all trying to synchronize with each other. The solution is to send the RDF N-Triples over the HTML5 WebSocket protocol.
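As a rough sketch of what a camera update could look like on the wire, here is a minimal serializer in Java. The namespace `http://example.org/nexus#` and the subject/predicate names are purely illustrative assumptions, not Nexus's actual vocabulary:

```java
import java.util.Locale;

// Sketch: serialize a client's camera pose as RDF N-Triples before sending
// it to the Nexus server. The vocabulary below is made up for illustration;
// Nexus's real predicates may differ.
class CameraPose {
    static String toNTriples(String clientUri, double[] pos, double[] rot) {
        String ns = "http://example.org/nexus#"; // hypothetical namespace
        StringBuilder sb = new StringBuilder();
        // One triple for position, one for rotation (Euler angles in degrees)
        sb.append(String.format(Locale.ROOT,
            "<%s> <%sposition> \"%.3f %.3f %.3f\" .\n",
            clientUri, ns, pos[0], pos[1], pos[2]));
        sb.append(String.format(Locale.ROOT,
            "<%s> <%srotation> \"%.3f %.3f %.3f\" .\n",
            clientUri, ns, rot[0], rot[1], rot[2]));
        return sb.toString();
    }
}
```

Each client would emit a pair of triples like these on every camera move, and the server would relay them so every other client can place that client's avatar and view direction.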
What are WebSockets? WebSocket is a bi-directional, full-duplex communications protocol standardized alongside HTML5. WebSockets let my WebGL clients talk back and forth with the Nexus server without resorting to HTTP polling. I will be adding WebSockets to my OpenSimulator client as well.
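To illustrate the receiving side of that bi-directional stream, here is a small sketch using Java's standard `java.net.http.WebSocket` listener API. This is not Nexus's actual client code (the WebGL clients live in the browser); it just shows how a listener could accumulate WebSocket text frames and split them into N-Triples lines:

```java
import java.net.http.WebSocket;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletionStage;

// Illustrative listener: collects complete N-Triples lines pushed by a
// server over a WebSocket connection. Frames may arrive in fragments, so
// we buffer until the final fragment of a message.
class TripleListener implements WebSocket.Listener {
    private final StringBuilder buffer = new StringBuilder();
    final List<String> triples = new ArrayList<>();

    @Override
    public CompletionStage<?> onText(WebSocket ws, CharSequence data, boolean last) {
        buffer.append(data);
        if (last) { // complete message: one or more N-Triples lines
            for (String line : buffer.toString().split("\n")) {
                if (!line.isBlank()) triples.add(line.trim());
            }
            buffer.setLength(0);
        }
        // Request the next frame (null-guarded so the logic is testable offline)
        if (ws != null) ws.request(1);
        return null;
    }
}
```

A real client would connect with `HttpClient.newHttpClient().newWebSocketBuilder().buildAsync(uri, new TripleListener())` and then parse each collected line as a triple.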
I've embedded Jetty in Nexus, so Apache Tomcat is no longer necessary to run it, which simplifies deployment of the Nexus server software. Jetty also has a nice, clean HTML5 WebSockets implementation and lets me serve both HTTP and WebSockets on the same IP address and port. Nexus client/server communication is just streams of RDF triples flowing in both directions over WebSockets.
Here is my poster from the 2011 Gordon Conference on Visualization in Science and Education a couple of weeks ago, where I presented the progress so far on Nexus.