Semantic Web

Haylyn - Viewing the Semantic Web Through the Oculus Rift

A couple of days ago, my developer version of the Oculus Rift Virtual Reality (VR) headset arrived.  It came in a very slick plastic case and hooked up to my computer in minutes.  I installed the software, ran the demonstration "world", and stuck my head into the Oculus Rift - amazing!  The quality and sense of immersion were better than I thought they would be.  A couple of years ago, I had ordered another VR headset from a different company and, well, after five minutes of playing with it, I shipped it back.  It was like looking at the screen down a five-foot tunnel.  It had head tracking and was neat for a few minutes, but I was never going to use it in any practical sense.  So, I shipped it back.

The Oculus Rift does not suffer from this tunnel effect!  Definitely a keeper.  I wanted to interface the Oculus Rift with my Haylyn Project (formerly known as Nexus; I'll go into the reasons for the name change some other time).  In the past couple of months, I had rewritten the Haylyn WebGL/HTML5 client and changed the WebGL libraries from GLGE to Three.js.  GLGE was excellent to work with, but the project has been inactive for too long for my comfort, and Three.js has a fairly active user and developer base.  The question now was how to access the Oculus Rift from within the browser.  After a little bit of searching, I downloaded and installed the vr.js code by Ben Vanik, which includes an NPAPI plugin that works with Chrome and Firefox.  Ben Vanik provides several demos for using vr.js with the Oculus Rift, including a Three.js version.  The image in this article shows the dual-screen effect seen without the Oculus Rift headset on.  It depicts the aggregated HTTP traffic to my VIVO site as linked data, which I presented as a poster at the 4th VIVO National Conference last week.  See poster here.

With the Oculus Rift headset, graphs and other visuals can be seen in 3D, and with the added bonus of head tracking, you can immerse yourself in a world of data and look up, down, left, right, diagonally, etc., just by moving and turning your head.  Haylyn's WASD functionality allows you to move around the scene of RDF linked data.  The "endless plane" and cubes in the image are objects that I added from the vr.js demo.  They looked good but have nothing to do with the colored RDF graph.  The vr.js libraries also work with Sixense's Razer Hydra, which is a motion-capture system for your hands.  Sixense is also working on a wireless version of the Hydra called the STEM System.  I can't imagine the mouse and keyboard being the pinnacle of human/computer interface technology.  Hey, W3C Device APIs Group, can we add the Rift?!  The thought of being able to reach into the 3D scene and grab and manipulate triples and data with my hands...  Some distractions are too cool to ignore...must...order.....Squirrel!



Nexus Project: WebGL Client/Server Communications Test using RDF over WebSockets

A video from July 25, 2011 showing the first successful test run of my Nexus Project's WebGL client/server communications using HTML5 WebSockets rather than HTTP polling.  This visualization shows Friend of a Friend (FOAF) RDF graph data being displayed in 3D as its layout is being determined by a 3D force-directed layout algorithm.  I got tired of digging up the video on my iPhone to show people, so I decided to post it.  Many things have been done since this video (latest browser support, Jetty 8, GLGE 0.9, speed improvements, and better screen capture than my iPhone too ;-)  I have been considering a different RDF serialization, since N-Triples is hopelessly uncompressed, but it made for the easiest implementation since N-Triples parsers are easy to write in JavaScript.  Jena also supports N-Triples serialization, so nothing had to be done on the server end of things.  I was just at ISWC 2012 in Boston, and it was suggested that I use Turtle (I was also considering JSON-LD or even a binary RDF format), but honestly, the speed of N-Triples is sufficient for now, and I would rather work towards a first release of the software.  It's too alluring to endlessly tinker (and I love to tinker, by the way).
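As mentioned above, N-Triples parsers are easy to write in JavaScript. Here is a minimal sketch of one (function names are my own, and it handles only URI terms and plain literals - no escape sequences, datatypes, language tags, or blank nodes, which a complete parser would need):

```javascript
// Minimal N-Triples line parser: returns {s, p, o} or null for
// blank/comment/unparseable lines. Handles <uri> terms and plain
// "literal" objects only -- no escapes, datatypes, or language tags.
function parseNTriple(line) {
  const m = line.trim().match(
    /^<([^>]*)>\s+<([^>]*)>\s+(?:<([^>]*)>|"([^"]*)")\s*\.$/
  );
  if (!m) return null;
  return {
    s: m[1],
    p: m[2],
    o: m[3] !== undefined
      ? { type: 'uri', value: m[3] }
      : { type: 'literal', value: m[4] },
  };
}

// Parse a whole WebSocket message (one triple per line).
function parseNTriples(text) {
  return text.split('\n').map(parseNTriple).filter(t => t !== null);
}
```

A client receiving a message would simply feed it to `parseNTriples` and walk the resulting array triple by triple.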


Building HTML5 pages one RDF Triple at a Time

Alright, it doesn't look like much, but there is something special about the web dialog being displayed over the 3D graph - it was built triple by triple.  I had reached the point in developing Nexus where I needed interactive dialogs for display control and user input.  Up until this point, I had been working with 3D objects and taking SPARQL commands via a text box at the bottom of the screen, but easier control over visualizations was needed.  Nexus client/server communications work by passing the RDF data that represents the 3D visual displays from the server to the client over the HTML5 WebSockets protocol, with the client sending any responses back over the same bi-directional WebSockets connection.

But how to handle HTML5 web dialogs?  Also, in keeping with one of the Nexus design principles, "it must be collaborative", how would I keep HTML5 dialogs synchronized between multiple clients?  Another Nexus design principle is "it must be RDF".  If you think about it, HTML, CSS, SVG, and RDF are all just data in the end, but in four different formats.  If we have RDF, why not just make them all RDF?  I had looked around, and people had asked the question of how to represent actual HTML pages as RDF years ago, but the only comprehensive work I could find on the subject is TopQuadrant's SPARQL Web Pages.  TopQuadrant represents SPARQL (as SPIN - SPARQL as RDF) and HTML/CSS as RDF, then processes the combined RDF data server-side to render HTML pages (in the same fashion as JSP, ASP, or PHP), passing the rendered HTML/CSS pages to the client.  TopQuadrant even has RDF ontologies to represent HTML 4.01, CSS, and SVG Tiny 1.2.

But I wanted to put a slightly different "spin" on this.  ;-)  I decided to take the web pages represented as RDF on the server, pass that data to the client as RDF over WebSockets, and then render the web pages (or fragments) triple by triple using the HTML Document Object Model (DOM) client-side.  I created my own HTML/CSS ontologies since TopQuadrant's are weighted towards their SPIN technology.  The above image is a 3D depiction of Tim Berners-Lee's FOAF file as a 3D graph.  The black dialog was built by executing the following SPARQL query against the graph to determine the number of times each predicate is used in the RDF graph.  This is a precursor dialog to allow a user to remove uninformative predicates from the view (for example, removing triples that represent gender when you know you are looking at a same-sex population of people).

SPARQL Query to determine numbers of different predicates:
select distinct ?p (count(?p) as ?count)
where {graph ?g {?s ?p ?o}}
group by ?p
order by desc(?count)

After the execution of this SPARQL query, I follow the same model as I do in the 3D visualizations: you never see your actual data, you see a visual representation of your data.  Having this visualization layer over the actual data allows for multiple visualizations of the same thing at the same time.  It also allows for an amazing level of flexibility, since visual data can be manipulated by a variety of SPARQL update queries against a combination of the visual data and the actual data (both being represented as RDF).  The results are wrapped in HTML/CSS (an RDF version of it), which then creates the final RDF graph, which can be viewed in this link (HTML/CSS as RDF).  This is the actual data that is sent over the WebSockets connection to build the dialog.  The client reads the data triple by triple, creates the HTML fragment (wrapped in a DIV), and then adds it to the existing web page.  Each row of the table in RDF has a <nex:clickable> property set to true.  What is this for?  It tells the client to attach an onclick handler to the row.  All rows have a unique ID set.  The onclick handler simply sends a single triple back to the server to indicate which object (row) has been clicked.  For example:

<> <html:event> <html:hasBeenClicked>

It is then up to the server to determine how to respond to the end-user's click.  For what I am working on at the moment, the background color will be set to a "highlight" color, which takes a single triple sent to the client.  Keep in mind that another client may be attached to this visualization at the same time, and that same triple can also be sent to keep that client's version of the dialog in sync with the client who did the clicking.  There is also no reason why multiple clients cannot click on the same dialog at the same time and on different rows (but on different browsers) for everyone to pick what they want to select.  Singular triple changes allow for a lot of flexibility, as well as an "AJAX-like" experience, since no screen refreshes are required on any of the concurrent clients.
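The triple-by-triple fragment building described above can be sketched roughly as follows. Note that the predicate names here (html:tag, html:text, html:child, nex:clickable) are illustrative stand-ins, not the actual Nexus ontology, and a real client would build DOM nodes rather than an HTML string:

```javascript
// Sketch: build an HTML fragment from triples describing it.
// Predicates html:tag, html:text, html:child, and nex:clickable are
// hypothetical stand-ins for the Nexus HTML/CSS ontology.
function buildFragment(triples) {
  const nodes = {};
  const node = id =>
    nodes[id] || (nodes[id] = { id, tag: 'div', text: '', children: [], clickable: false });
  const isChild = new Set();
  for (const { s, p, o } of triples) {
    const n = node(s);
    if (p === 'html:tag') n.tag = o;
    else if (p === 'html:text') n.text = o;
    else if (p === 'nex:clickable') n.clickable = (o === 'true');
    else if (p === 'html:child') { node(o); n.children.push(o); isChild.add(o); }
  }
  const render = id => {
    const n = nodes[id];
    const attrs = ` id="${n.id}"` + (n.clickable ? ' onclick="rowClicked(this.id)"' : '');
    return `<${n.tag}${attrs}>` + n.text + n.children.map(render).join('') + `</${n.tag}>`;
  };
  const roots = Object.keys(nodes).filter(id => !isChild.has(id));
  return '<div>' + roots.map(render).join('') + '</div>';  // fragment wrapped in a DIV
}

// A clicked row sends a single triple back over the WebSocket:
function clickTriple(rowId) {
  return `<${rowId}> <html:event> <html:hasBeenClicked> .`;
}
```

The server can then respond with another single triple (e.g. a background-color change) that every connected client applies, keeping all dialogs in sync.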

Other updates since last blog post:

1) The force-directed layout engine has been rewritten to take advantage of multiple cores.  This has greatly accelerated layout computations since I work on a 6-core computer most of the time.

2) The "classic" RDF reification model, which allows Nexus to make statements about a specific triple's literal or predicate, has been replaced with a Named Graph version of RDF reification.  The notation is much easier to work with than the rdf:Statement, rdf:subject, rdf:predicate, rdf:object style of reification.  This was done after the newer Apache Jena library was added to Nexus.

3) Creation of a threaded 3D visualization "controller" - the earlier version of Nexus worked SPARQL command by SPARQL command.  The new version can process multiple triples in batches, as well as run layouts in separate threads, which also enables incremental updates to the current visualization.  Essentially, a user can watch a graph layout occur as it happens.  This has been very helpful in debugging, since I can see live what is actually happening.

4) XY/XZ client rotations and zooming, with the ability to alt-click a new center of rotation to allow "camming" through the 3D simulations.  Fun with quaternions!

5) COLLADA duck avatar representation of different clients.  This is needed to let a client know where other clients are collaboratively "looking" in the 3D visualization.  The duck was less boring than a ball or a cube or an arrow. And it makes my kids laugh.  :-)



RDF Triples over HTML5 WebSockets

From the beginning, I wanted Nexus to be a collaborative visualization system allowing multiple clients in multiple locations to see the same visualizations in real-time.  The issue that arises here is knowing "where" in the 3D semantic web visualization the other clients (people/avatars) are and which direction they are looking.  In the 3D digital world, you have the concept of a "camera".  This is essentially your point of view in a particular 3D simulation.  As the camera moves, your view of the model changes as well.  In order to know where the other clients are in the simulation, the camera position and rotation data on each client are converted to RDF triples and then sent to the Nexus server to be rebroadcast and synchronized to all other clients.  Nexus eats, breathes, and internalizes everything as RDF.  HTTP polling would not work well as a transport for these triples, especially with a dozen or more clients all trying to synchronize with each other.  The solution is sending the RDF N-Triples using the HTML5 WebSocket protocol.

What are WebSockets?  The WebSocket protocol is a bi-directional, full-duplex communications protocol that is part of the HTML5 specification.  WebSockets allow my WebGL clients to talk back and forth with the Nexus server without resorting to http polling.  I will be adding WebSockets to my OpenSimulator client as well.
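The pose-to-triples conversion described above can be sketched like this. The session URI scheme and the nex:xyz / nex:rotation predicate names are my assumptions for illustration, not the actual Nexus vocabulary:

```javascript
// Sketch: serialize a client's camera pose as N-Triples for the server
// to rebroadcast to all other clients. The session URI and the
// nex:xyz / nex:rotation predicates are hypothetical.
function cameraPoseTriples(sessionUri, pos, quat) {
  const xyz = [pos.x, pos.y, pos.z].join(',');
  const rot = [quat.x, quat.y, quat.z, quat.w].join(',');
  return (
    `<${sessionUri}> <nex:xyz> "${xyz}" .\n` +
    `<${sessionUri}> <nex:rotation> "${rot}" .\n`
  );
}

// In the browser client, this would be sent whenever the pose changes:
//   socket.send(cameraPoseTriples(sessionUri, camera.position, camera.quaternion));
```

Because the payload is just more N-Triples, the server can store, query, and rebroadcast client poses with the same machinery it uses for everything else.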

I've embedded Jetty in Nexus, so Apache Tomcat is no longer necessary to run Nexus, which simplifies the deployment of the Nexus server software.  Jetty also has a nice, clean HTML5 WebSockets implementation and allows me to do both HTTP and WebSockets on the same IP and port.  Nexus client/server communications are all just streams of RDF triples going in both directions using the HTML5 WebSockets protocol.

Here is my poster for the 2011 Gordon Conference on Visualization in Science and Education from a couple weeks ago, where I presented the progress so far on Nexus.



Nexus WebGL 3D RDF client in Technicolor

It took less time than I thought it would, but here is an updated version of the 3D FOAF graph from my last posting, with node sizes determined by the log base 10 of the number of links into a particular node.  The Coulomb's-law constant for the larger nodes is adjusted so that larger nodes "push" out harder to accommodate the larger spheres, preventing sphere clashes.  This image was taken with WebGL running in Chrome.
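The sizing rule above can be sketched in a few lines. The base radius, repulsion constant, and scaling exponent here are illustrative values, not Nexus's actual constants:

```javascript
// Node radius grows with log base 10 of the node's in-degree,
// as described above. Base radius of 1.0 is an illustrative choice.
function nodeRadius(inDegree, base = 1.0) {
  return base * (1 + Math.log10(Math.max(1, inDegree)));
}

// Larger nodes get a larger Coulomb repulsion constant so big spheres
// "push" harder and avoid clashes. k and the exponent are illustrative.
function repulsionConstant(radius, k = 100, exponent = 2) {
  return k * Math.pow(radius, exponent);
}
```

A node with one inbound link keeps the base radius; a hub with 100 inbound links gets three times the base radius, and a correspondingly stronger push.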

Next on the agenda is the actual display of text labels over subjects, predicates, and objects.  Also to be added is WebGL camera and avatar positioning data.  What's this?  In the OpenSimulator client, dozens of people can view and interact with the same RDF model/structure.  Where one of those people is looking or focusing their attention is indicated by their 3D cursor or avatar.  However, this leaves the WebGL client users in the dark as to what the OpenSimulator users and/or other WebGL clients are doing in the simulation.  I am planning to synchronize this information between all of the clients by streaming the avatar data (or camera position data in the case of WebGL) back to the Nexus server, where it will be pushed out to all clients in the form of more RDF triples.

The SPARQL commands for the colors and such for this image are as follows:

1) Make everything blue
insert {?rnode <nex:color> "0,0,1"} where {?node <nex:rnode> ?rnode}
insert {?pnode <nex:color> "0,0,1"} where {?node <nex:pnode> ?pnode}

2) Color white all literals
insert {?lnode <nex:color> "1,1,1"} where {?node <nex:lnode> ?lnode}

3) Color red all triples that are of foaf:knows
modify delete {?rnode <nex:color> "0,0,1"} insert {?rnode <nex:color> "1,0,0"}  where {?node <nex:rnode> ?rnode . ?node foaf:knows ?o }
modify delete {?pnode <nex:color> "0,0,1"} insert {?pnode <nex:color> "1,0,0"}  where {?node <nex:pnode> ?pnode . ?node rdf:predicate foaf:knows }

4) color green all triples of type rdf:type
modify delete {?rnode <nex:color> "0,0,1"} insert {?rnode <nex:color> "0,1,0"}  where {?node <nex:rnode> ?rnode . ?node rdf:type ?o }
modify delete {?pnode <nex:color> "0,0,1"} insert {?pnode <nex:color> "0,1,0"}  where {?node <nex:pnode> ?pnode . ?node rdf:predicate rdf:type }

5) Make everything shiny
insert {?rnode <nex:shiny> "3"} where {?node <nex:rnode> ?rnode}
insert {?pnode <nex:shiny> "3"} where {?node <nex:pnode> ?pnode}
insert {?lnode <nex:shiny> "3"} where {?node <nex:lnode> ?lnode}

Yes, I am planning on coming up with a far easier interface for the user than SPARQL. :-)


SPARQL 1.1 Controlled 3D RDF Visualization - from a Force-Directed Layout to a Molecular Visualization of DNA using Nexus in OpenSimulator

Nexus is an experiment with Semantic Web RDF data visualized in three dimensions, which can be done collaboratively (and concurrently) amongst many people at disparate locations.  Nexus also acts as a platform to try out various design ideas, technologies, and methodologies.  The original Nexus design read, displayed, and exported RDF data.  I have reworked the back-end of Nexus to use RDF internally and to communicate with its front-end client(s) in pure N-Triples.  The internal RDF representation enables the use of SPARQL (the query language for RDF) via Jena ARQ to manipulate the RDF graph, and thus the overall visualization.  In this posting, I will show the SPARQL 1.1 commands used to manipulate the structural data of a strand of DNA that has been converted to RDF from the original PDB format.  The resulting display will be shown as a force-directed layout and then manipulated into a physical layout determined by the crystal-structure coordinates contained within the RDF.  Essentially, this allows for molecular visualization within Nexus, letting us actually see the strand of DNA in a physical form.

Basic Visualization Design Concepts in Nexus
The basic unit of information we want to visualize is the RDF triple:

Subject - Predicate - Object

In keeping with the "pure RDF" concept, this triple is annotated with RDF triples using a display ontology designed for Nexus, its prefix being "nex".  Statements like nex:color, nex:xyz, nex:glow, and nex:nodesize can be made about any resource, whether subject or object.  For each resource, a "display node" triple is introduced and attached to the original RDF resource.  RDF nex statements are then made about that display node.  For example:

?s ?p ?o
?s nex:rnode ?displaynode
?displaynode nex:color "1,0,0" (red)
?displaynode nex:xyz "2.34,7.34,1.23"
?displaynode rdf:type nex:sphere
?displaynode nex:radius "3.4"
    and so on.....

Adding this "display node" layer adds a large degree of flexibility for RDF displays.  At one point, the display nodes were represented as blank nodes, but in the current version of Nexus, I converted these to resources.  They were just easier to work with that way.
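Minting a display node and its nex: statements is mechanical; a sketch of how a client or server might generate those triples follows. The URI-minting scheme (appending "#dn" plus a counter) and the example property values are my own, not Nexus's:

```javascript
// Sketch: attach a display node to a resource and make nex: statements
// about it, mirroring the triple pattern listed above. The "#dn" URI
// scheme and the color/radius values are illustrative assumptions.
let dnCounter = 0;
function displayNodeTriples(resourceUri) {
  const dn = `${resourceUri}#dn${dnCounter++}`;
  return [
    { s: resourceUri, p: 'nex:rnode', o: dn },
    { s: dn, p: 'nex:color', o: '1,0,0' },   // red
    { s: dn, p: 'rdf:type', o: 'nex:sphere' },
    { s: dn, p: 'nex:radius', o: '3.4' },
  ];
}
```

Because the display node is itself a resource, SPARQL updates can restyle the visualization without ever touching the underlying data triples.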

Fun with RDF Reification
Visualization nodes cannot be attached directly to predicates and literals, because RDF statements cannot be made about predicates or literals; you can only make RDF statements about resources.  However, you can make statements about statements through a process known as RDF reification.  The triples for a single reified statement in Nexus look as follows:

?s ?p ?literal  (statement to be visualized)

The following RDF statements attach a display node to the predicate (?p) and literal (?literal)

?viznode rdf:type rdf:Statement
?viznode rdf:subject ?s
?viznode rdf:predicate ?p
?viznode rdf:object ?literal
?viznode nex:pnode ?displaynode
?viznode nex:lnode ?displaynode

No, RDF reification is not pretty; I'm not a fan of the syntax.  But it does allow you to make statements about other statements and, in my case, to make indirect statements about specific predicates and literals without having to modify any of the ontologies or resort to named graphs (not that that method is bad, I just haven't thought much about it yet).  So, at this point, we have three kinds of display nodes: rnodes (for resources), pnodes (for predicates), and lnodes (for literals).  These three types are actually all the same, but assigning them different names makes it easier to distinguish them from each other when querying the RDF.  This could have been done with an rdf:type statement, but this was a bit more compact.  I may or may not change it later.  The W3C RDF working group had a recent discussion of whether RDF reification should be deprecated (see here).  I think the functionality of reification is needed; I just think its syntax and design need to be reworked.  For now, it is enabling me to do my arbitrary 3D visualizations.
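Generating the reification triples shown above is straightforward to automate; here is a minimal sketch (the viznode and display-node URIs passed in are illustrative, and in practice they would be minted by the server):

```javascript
// Sketch: reify a statement so display nodes can be attached to its
// predicate and literal, following the classic rdf:Statement pattern
// described above. All URIs passed in are hypothetical examples.
function reifyForDisplay(viznode, s, p, o, pDisplayNode, lDisplayNode) {
  return [
    { s: viznode, p: 'rdf:type', o: 'rdf:Statement' },
    { s: viznode, p: 'rdf:subject', o: s },
    { s: viznode, p: 'rdf:predicate', o: p },
    { s: viznode, p: 'rdf:object', o: o },
    { s: viznode, p: 'nex:pnode', o: pDisplayNode },  // display node for the predicate
    { s: viznode, p: 'nex:lnode', o: lDisplayNode },  // display node for the literal
  ];
}
```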

OpenSimulator Object References
Rather than relying upon OpenSimulator's inventory mechanism and object ID system, objects are stored as RDF and assigned dereferenceable RDF URIs, which allow the objects to be accessed from remote OpenSimulator regions via the Nexus server code/triple store.  This allows multiple regions (even on different grids) to concurrently access the same RDF visualization.  The same RDF URI method could be used as a universal reference to OpenSimulator users and groups (as well as objects), and RDF data interchange between OpenSimulator regions could also be quite handy, but that's another project for another day... :-)  For now, we'll see how well it works within Nexus.

Laying out the RDF Graph
Nexus implements a basic force-directed layout algorithm where the repulsion between nodes is modeled with Coulomb's law and the predicates are modeled as springs with Hooke's law.  When applying the force layout to the loaded RDF graph (and this can be any RDF graph), the Nexus triples are ignored.  Later down the road, I would like to experiment with various modifications of the force-directed method and/or different methods altogether.  I still have a bit of work to do on the Nexus force-directed layout engine so that the results are more usable.
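One iteration of the Coulomb-plus-Hooke scheme described above can be sketched as follows. The constants are illustrative; Nexus's real engine is multithreaded and tuned quite differently:

```javascript
// One iteration of a basic force-directed layout: Coulomb repulsion
// between every pair of nodes, Hooke spring attraction along each edge.
// Constants (kRepel, kSpring, restLen, dt) are illustrative values.
function stepLayout(nodes, edges, { kRepel = 100, kSpring = 0.05, restLen = 10, dt = 0.1 } = {}) {
  const force = nodes.map(() => ({ x: 0, y: 0, z: 0 }));
  // Coulomb's law: F = kRepel / r^2, pushing each pair apart.
  for (let i = 0; i < nodes.length; i++) {
    for (let j = i + 1; j < nodes.length; j++) {
      const dx = nodes[i].x - nodes[j].x, dy = nodes[i].y - nodes[j].y, dz = nodes[i].z - nodes[j].z;
      const d2 = dx * dx + dy * dy + dz * dz || 1e-6;
      const d = Math.sqrt(d2), f = kRepel / d2;
      force[i].x += f * dx / d; force[i].y += f * dy / d; force[i].z += f * dz / d;
      force[j].x -= f * dx / d; force[j].y -= f * dy / d; force[j].z -= f * dz / d;
    }
  }
  // Hooke's law: F = kSpring * (distance - restLen), pulling edge endpoints together.
  for (const [a, b] of edges) {
    const dx = nodes[b].x - nodes[a].x, dy = nodes[b].y - nodes[a].y, dz = nodes[b].z - nodes[a].z;
    const d = Math.sqrt(dx * dx + dy * dy + dz * dz) || 1e-3;
    const f = kSpring * (d - restLen);
    force[a].x += f * dx / d; force[a].y += f * dy / d; force[a].z += f * dz / d;
    force[b].x -= f * dx / d; force[b].y -= f * dy / d; force[b].z -= f * dz / d;
  }
  // Euler integration step.
  for (let i = 0; i < nodes.length; i++) {
    nodes[i].x += force[i].x * dt; nodes[i].y += force[i].y * dt; nodes[i].z += force[i].z * dt;
  }
}
```

Running this repeatedly (and sending the updated nex:xyz triples to clients each batch) gives the "watch the layout happen" effect mentioned earlier.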

Sending the back-end RDF model to the front-end for visualization
The purpose of the front-end is to render the visualization nodes.  The RDF is pulled from the back-end using HTTP and is sent purely as RDF N-Triples.  In an earlier version of Nexus, this was mostly RDF; now it is purely RDF.  When commands are needed to instruct the front-end to do things, the commands are sent as RDF triples.  For example, if I want the front-end to redraw the model, the back-end sends a triple about the session to the front-end as follows:

<> <nex:redraw> "true"  (an example)

Turning RDF into DNANexus - Semantic DNA
Back when I attended CSHALS 2010, I had started to write a PDB ontology to express PDB as RDF, but shelved it to work on the core of Nexus.  No one else had an RDF representation of PDB that I could find.  Periodically I checked, and finally during the summer of 2010 I discovered that the Michel Dumontier Lab had written a converter for PDB and made the program available (pdb2rdf).  And there was rejoicing in the streets!  I now had a program that could do the PDB (Protein Data Bank format) to RDF conversion.  The converted PDB file resulted in 16,473 triples.  It doesn't look like pdb2rdf transfers the bonding/connectivity information in the PDB files yet, so I'm limited to space-filling views at the moment.  When the bond information gets added to the RDF conversion, I will be able to do ball-and-stick views as well.

Now, in order to turn the force-directed graph (the first figure) into the visualization of DNA (the second figure), we issue the following SPARQL 1.1 commands:

Step #1 - Set all display nodes' visible property to false.  The nex:visible predicate tells the server whether to include a visualization node in the final display, and whether to even consider it in the layout routines.

modify delete {?s <nex:visible> ?o} insert {?s <nex:visible> "0"} where {?s <nex:visible> ?o}

Step #2 - Set the visible property to true for display nodes attached to atom nodes.  We use the predicate "pdb:hasSpatialLocation" to select atom nodes, since the atom nodes are the only nodes that have a spatial location.

modify delete {?rnode <nex:visible> ?o} insert {?rnode <nex:visible> "1"} where {?atom <nex:rnode> ?rnode . ?atom pdb:hasSpatialLocation ?loc . ?rnode <nex:visible> ?o}

Step #3 - We now change the coordinates of the force-directed atoms (their visualization nodes) to the crystal-determined XYZ locations by reconstructing a vector from the XYZ triples.

modify delete {?rnode <nex:xyz> ?o} insert {?rnode <nex:xyz> ?xyz} where {?atom <nex:rnode> ?rnode . ?rnode <nex:xyz> ?o . ?atom pdb:hasSpatialLocation ?loc . ?loc pdb:hasXCoordinate ?xc . ?loc pdb:hasYCoordinate ?yc . ?loc pdb:hasZCoordinate ?zc. ?xc pdb:hasValue ?x . ?yc pdb:hasValue ?y . ?zc pdb:hasValue ?z . let (?xyz := fn:concat(?x,",",?y,",",?z)) }

Step #4 - The following series of commands sets the nodesize (radius) of the atom visualization nodes to values that represent the actual atomic radii of the various types of atoms present in the structure.  If this data were entered into the system as RDF triples, these six commands could be reduced to one.

modify delete {?rnode <nex:nodesize> ?o} insert {?rnode <nex:nodesize> "1.0"} where {?atom <nex:rnode> ?rnode . ?atom rdf:type pdb:HydrogenAtom . ?rnode <nex:nodesize> ?o}

modify delete {?rnode <nex:nodesize> ?o} insert {?rnode <nex:nodesize> "2.8"} where {?atom <nex:rnode> ?rnode . ?atom rdf:type pdb:CarbonAtom . ?rnode <nex:nodesize> ?o}

modify delete {?rnode <nex:nodesize> ?o} insert {?rnode <nex:nodesize> "2.6"} where {?atom <nex:rnode> ?rnode . ?atom rdf:type pdb:NitrogenAtom . ?rnode <nex:nodesize> ?o}

modify delete {?rnode <nex:nodesize> ?o} insert {?rnode <nex:nodesize> "3.4"} where {?atom <nex:rnode> ?rnode . ?atom rdf:type pdb:PhosphorusAtom . ?rnode <nex:nodesize> ?o}

modify delete {?rnode <nex:nodesize> ?o} insert {?rnode <nex:nodesize> "2.4"} where {?atom <nex:rnode> ?rnode . ?atom rdf:type pdb:OxygenAtom . ?rnode <nex:nodesize> ?o}

modify delete {?rnode <nex:nodesize> ?o} insert {?rnode <nex:nodesize> "3.2"} where {?atom <nex:rnode> ?rnode . ?atom rdf:type pdb:SufurousAtom . ?rnode <nex:nodesize> ?o}

Step #5 - Now for a little flair: we set the shininess of the atom visualization nodes to a glossy metallic value, again using the "hasSpatialLocation" predicate to pick out the atom nodes.

insert {?rnode <nex:shiny> "3"} where {?atom <nex:rnode> ?rnode . ?atom pdb:hasSpatialLocation ?loc}

Step #6 - We now color all atom visualization nodes blue

insert {?rnode <nex:color> "0,0,1"} where {?atom <nex:rnode> ?rnode . ?atom pdb:hasSpatialLocation ?loc}

Step #7 - The next five commands color the backbone of the DNA green by selecting atom nodes with names of the form *' and *'', since backbone atoms are traditionally labeled with prime and double-prime characters.  The last three commands handle the phosphates.

modify delete {?rnode <nex:color> ?o} insert {?rnode <nex:color> "0,1,0"} where {?atom <nex:rnode> ?rnode . ?atom pdb:hasSpatialLocation ?loc . ?atom rdfs:label ?name . ?rnode <nex:color> ?o . filter regex (?name, "''")}

modify delete {?rnode <nex:color> ?o} insert {?rnode <nex:color> "0,1,0"} where {?atom <nex:rnode> ?rnode . ?atom pdb:hasSpatialLocation ?loc . ?atom rdfs:label ?name . ?rnode <nex:color> ?o . filter regex (?name, "'")}

modify delete {?rnode <nex:color> ?o} insert {?rnode <nex:color> "0,1,0"} where {?atom <nex:rnode> ?rnode . ?atom pdb:hasSpatialLocation ?loc . ?atom rdfs:label ?name . ?rnode <nex:color> ?o . filter regex (?name, "P")}

modify delete {?rnode <nex:color> ?o} insert {?rnode <nex:color> "0,1,0"} where {?atom <nex:rnode> ?rnode . ?atom pdb:hasSpatialLocation ?loc . ?atom rdfs:label ?name . ?rnode <nex:color> ?o . filter regex (?name, "OP1")}

modify delete {?rnode <nex:color> ?o} insert {?rnode <nex:color> "0,1,0"} where {?atom <nex:rnode> ?rnode . ?atom pdb:hasSpatialLocation ?loc . ?atom rdfs:label ?name . ?rnode <nex:color> ?o . filter regex (?name, "OP2")}

Step #8 - Lastly, we make the phosphorus atom display nodes glow by inserting nex:glow statements attached to the corresponding display nodes.

insert {?rnode <nex:glow> "0.2"} where {?atom <nex:rnode> ?rnode . ?atom pdb:hasSpatialLocation ?loc . ?atom rdfs:label ?name . filter (?name="P")}

The resulting 3D RDF graph now looks like a DNA model that I had done with my prior Monolith project, which dealt exclusively with non-RDF PDB-formatted data.  The DNA structure can be colored and effects set in many different ways using the powerful new SPARQL 1.1 query language with any of the data present in the loaded RDF graph, not just what is displayed.  We can even access remote SPARQL endpoints and include their data as well.  Since Nexus handles any RDF, we are not limited to molecular visualization.  We can branch off into other linked data by using the PubMed ID triple present in the RDF-converted PDB file and link over to PubMed publication data, or anywhere else in the LOD (Linked Open Data) cloud.  For those of you thinking "these commands are neither easy nor obvious" (except perhaps to the SemWeb junkies), you would be correct.  I'm exploring ways in which the commands can be executed visually via the 3D front-end interface, but I needed a flexible foundation on which to build, and the SPARQL-driven engine seemed the best way to achieve this.  As it is, several of the above commands could be rewritten to be more compact and fewer in number, but I am learning about this stuff myself as I go along.  I'm getting better. ;-)

Next Steps
I've been focused on the semantic web/molecular visualization cross-over, and now that I've hit that milestone, there is some front-end and back-end work that still needs to be done.  The data is there, but I am not currently displaying any of it in the actual visualization (RDF labels and such).  I would also like to enable a user to interact with the model graphically; interaction is currently limited to command-line SPARQL commands.  I tossed out the earlier half-SPARQL, half-my-own-concoction commands in favor of pure SPARQL.

This year, I did a poster presentation of Nexus combined with work that I have done with my colleagues at Stony Brook University (Dr. Janos Hajagos and Tammy DiPrima) for CSHALS 2011 (see poster here).  In the poster, I mentioned a couple of other things I am working on.  One of them is another Nexus front-end client based on WebGL/HTML5.  I had started this last year but shelved it while I redesigned the Nexus back-end (server) to be the all-RDF system that it is now.  Now that the server is working again, I will get back to the WebGL/HTML5 client.  As part of that project, I want to experiment with using WebSockets rather than HTTP calls between the WebGL client and Nexus.  I will also update the original Nexus client, which I did in Second Life, but it will not be able to render displays as large as I can in OpenSimulator, since Linden Lab limits region objects to 15,000 primitives.  The DNA force-directed model seen here is 26,713 primitives, nearly twice what Second Life regions allow.  But I have provisions to allow a limited client to see a smaller window of a larger model.  All three clients will use the same back-end server and will be able to view any of the server models at the same time.  For example, 30 avatars in an OpenSimulator region will be able to work with 30 avatars in a Second Life region along with 30 different WebGL/HTML5 clients at the same time and see changes made from any of the clients live.  RDF breaks down the walled gardens between worlds.



3D RDF FOAF Graphs in OpenSimulator

Here is an image (click for a larger version) of a model I did earlier with Nexus in Second Life, using Tim Berners-Lee's and James Hendler's FOAF data, linked and visualized in 3D within OpenSimulator with Nexus.  The only code change needed to port it over from Second Life was the removal of the warppos function, since it is no longer needed.  However, I think I may have uncovered a small bug/limitation in OpenSimulator URL lengths.  I put in a small work-around by shortening the URLs to the FOAF data to avoid the bug when loading from the remote HTTP source, but I will have to go back, figure out what is actually going on, and report it if need be to the OpenSimulator programmers.  The Nexus commands used were:

color <1,0,0> spo where { ?s ?p ?o . filter ( ?p=foaf:knows ) }
color <0,1,0> spo where { ?s ?p ?o . filter ( ?p=rdf:type ) }
color <1,1,1> o where { ?s ?p ?o . filter ( isLiteral(?o) ) }
glow 0.2 o where { ?s ?p ?o . filter ( isLiteral(?o) ) }

Since the last time I did this FOAF data, I changed the default shape for literals to cubes and made them smaller so as not to have the literals dominate the scene as much.  Also added were some glow effects to highlight elements of interest.


3D RDF Model of RxNorm data

The following is a 3D RDF model of RxNorm data on drugs (example here) that contain lithium carbonate as an ingredient.  The red balls are the drug products, and the green ones (difficult to see in this view, easier in 3D) are the various ways lithium carbonate is listed in the RxNorm database.  The predicates rxnorm:ingredient_of and rxnorm:has_ingredient are colored yellow.  The RxNorm data was converted and is maintained by my colleague Dr. Janos Hajagos at Stony Brook University.  He participates in the W3C Linked Open Drug Data group.  The original RxNorm data can be found at:

and the RDF version can be found at our SPARQL endpoint/Triple Store (based on Virtuoso) at:

The data set for this model was constructed with the following SPARQL query:

prefix rxnorm: <>
construct {
  ?aui ?auip ?auio .
  ?auiingred ?auiingredp ?auiingredo . }
where {
<> rxnorm:hasRXCUI ?cui .
?aui rxnorm:hasRXCUI ?cui .
?aui ?auip ?auio .
?aui <> ?auiingred .
?auiingred ?auiingredp ?auiingredo
}

Nexus, up to this point, used FOAF data for testing.  I did this model to test with something a bit different.  This model is also a bit more complex, and it will help with visualizations in the future as additional visual functions are added, as well as modifications to the layout engine.


Color 3D RDF FOAF Graphs in Second Life

Added a new command for coloring the 3D RDF graphs.  It uses the syntax:

color <r,g,b> spo where { pattern }


<r,g,b> is a rgb color vector.  <1,1,0> would be yellow.

spo is either spo, sp, so, po, s, p, or o, to have the command operate on one of the combinations of subject, predicate, and/or object of the triple.

where { pattern } is borrowed right out of SPARQL's syntax.  Nexus actually uses its own ontology to describe the visual features of the 3D graph, such as nex:color, nex:alpha, nex:xyz, etc.  All graph data can then be streamed out, including the Nexus ontology, for persistence of a visualization session for later viewing or sharing.  In these three graphs, the @timberners_lee and @jahendler FOAF graph data that I have been using as test data are colored by issuing the following two commands:

color <1,0,0> spo where { ?s ?p ?o . filter ( ?p=foaf:knows ) }
color <0,1,0> spo where { ?s ?p ?o . filter ( ?p=rdf:type ) }

The first command colors all foaf:knows triples red, and the second command colors all triples that are rdf:type green.  The default color for everything is blue <0,0,1>.  In the lower right image, the lone green ball is actually foaf:Person.  Note, the data loaded can be any RDF, not just FOAF.  This model is currently on display at the Stony Brook SOM region.  Yes, this is in Second Life, so bring your friends, look at it together, and use IM and voice to discuss it collaboratively while you're there.  The region supports up to 100 concurrent users.


