This is the first of a three part series on reworking the Mugshot data model, looking at both the server and how the server communicates with clients. In this part, I'll examine how things are set up currently, extract some general principles of operation, and describe some goals to achieve in the rework. In the second part, I'll describe the underlying concepts of a new data model, and propose a client/server protocol. In the third part, I'll examine how we might implement the new model within the server.
Most data stored in the Mugshot server is stored in a SQL database. This includes user profiles and friends lists, content shared on the Mugshot site, per-user music history and so forth. The database also contains temporarily cached data retrieved from web services. We use the Hibernate implementation of EJB3 persistence as an object/relational layer on top of the database for convenience and to reduce the amount of raw SQL we need to write.
When we are making changes in the database, we use the Hibernate objects directly, but when presenting the data to a user, either in the JSP code to build a web page or when sending it over XMPP to the desktop client, we build on top of the raw database objects in two ways. First, we need to apply access controls. For example, we only want to show the email address for a user to people that user has listed as a friend. Second, we want to augment the raw database information with additional information not stored in the database. For example, the database object for a user playing a song includes the song's artist, album name and title, but when displaying that song on the web page, we want to add to that an image of the album's cover art - something we retrieve by using Amazon and Yahoo web services.
These two processes of access control and augmentation are achieved by wrapping the database object in a view object. A GroupView object wraps the Group database object; a PersonView object wraps the User and Account database objects. The largest class of view objects are the subclasses of BlockView, representing the different types of notification blocks that can be displayed to the user; at the current time there are 21 different BlockView subclasses.
Augmentation can be with information retrieved from web services, but we also want to augment view information with aggregated statistics from our database, such as the number of web pages a person has shared on Mugshot or the number of members in a group. Constantly redoing these queries would be prohibitively expensive, so we need to cache them. This is done using the LiveState system. The LiveState system keeps a cache of LiveUser and LiveGroup objects, which store the aggregated statistics of interest for users and groups respectively. When we create a PersonView object, we look up the LiveUser object for that person, and if it doesn't exist, compute a new one and store it in the cache. The LiveState caches are per-cluster-node; when one of the cluster nodes makes a change to the database that changes one of the cached statistics, it broadcasts a JMS message to the other cluster nodes to invalidate the corresponding LiveUser or LiveGroup object.
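The compute-on-miss and invalidate-on-broadcast pattern described above can be sketched roughly as follows. This is a hypothetical illustration, not actual server code: the class names echo LiveUser and LiveState, but the method names and the shape of the invalidation callback are assumptions (the real system uses JMS for the broadcast and Java rather than Python).

```python
class LiveUser:
    """Cached aggregate statistics for one user (fields are illustrative)."""
    def __init__(self, user_id, friend_count, share_count):
        self.user_id = user_id
        self.friend_count = friend_count
        self.share_count = share_count

class LiveStateCache:
    """Per-cluster-node cache of LiveUser objects."""
    def __init__(self, compute_live_user):
        self._cache = {}                   # user_id -> LiveUser
        self._compute = compute_live_user  # recomputes aggregates from the DB

    def get_live_user(self, user_id):
        # Compute and cache on miss; later lookups avoid the expensive queries.
        if user_id not in self._cache:
            self._cache[user_id] = self._compute(user_id)
        return self._cache[user_id]

    def on_invalidate_message(self, user_id):
        # Called when another node broadcasts that a database change
        # affected the cached aggregates for this user.
        self._cache.pop(user_id, None)
```

The important property is that a node never tries to patch cached aggregates in place; it simply drops them and recomputes on the next lookup.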
The LiveState system is also used to keep caches of the people that a user has listed as a friend and the people that have listed a user as a friend; being able to do quick tests of friend status is needed for access controls. You might think these would be fields of the LiveUser object but that turns out to be quite inefficient. We often need the aggregates but not the list of friends, or the list of "frienders" but not the list of friends. Instead there are two additional types of cached objects, one for each list. This points out a general problem with the LiveState system; working with the granularity of entire objects means that we are often computing information that we don't need.
The final piece of the data model is the PresenceService; we need to track who is online on the server, and also who is present in the different chat rooms. The PresenceService is a custom-written service built on top of JGroups that provides a cooperatively maintained view across the cluster of who is present at different "locations" (which are simply uninterpreted strings). One location represents online status, and there is a location for each chat room. A cluster node receives notifications from the PresenceService as presence changes.
In general, the protocol between the Mugshot server and the Mugshot desktop client is ad-hoc. The server has a range of XMPP IQ query messages that it responds to from the client, and also spontaneously sends different notification messages to the client when it thinks that the client would be interested. (For example, it sends a message to the client every time there are more notification blocks for the client to display.)
These IQ's and notifications have been built up over time as needed, and don't conform to any consistent model. The XML content of the different IQ responses and notification messages is specific to the type of IQ or message. However, a common element we use in a number of places is a "view stream". A view stream is a serialization of a collection of view objects in the server. Each view object has methods to serialize itself to XML and to find other objects that it references. To write each object to the XML view stream, the server first finds the referenced objects of that object, checks whether they have already been written and, if not, writes them to the stream, then writes the object itself.
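The write-once rule above can be sketched in a few lines. This is an illustration of the algorithm, not the server's actual code; the View class and method names are hypothetical stand-ins for the view objects' serialization and reference-finding methods.

```python
class View:
    """Stand-in for a server view object with references and XML output."""
    def __init__(self, view_id, refs=()):
        self.view_id = view_id
        self._refs = list(refs)

    def references(self):
        return self._refs

    def to_xml(self):
        return "<view id='%s'/>" % self.view_id

def write_view(view, stream, written):
    """Append `view` and everything it references to `stream`, each at most once."""
    # First write any referenced views that haven't appeared yet...
    for ref in view.references():
        if ref.view_id not in written:
            write_view(ref, stream, written)
    # ...then write the view itself, unless it was already written.
    if view.view_id not in written:
        written.add(view.view_id)
        stream.append(view.to_xml())
```

With a shared `written` set per stream, a PersonView referenced by many blocks is serialized only on its first appearance.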
Writing each object in the stream to the wire only once saves a lot of work and bandwidth, because it's common for certain views to be referenced many times. For example, each web share notification block that a user receives will reference their own PersonView as a recipient; when the client connects to the server, it asks the server via IQ for the 30 most recent blocks; we don't want to include 30 copies of the user's PersonView. But if you look at the XMPP stream over time, there is still an incredible amount of duplication of information, since a separate view stream is used for each message or IQ response.
One of the biggest problems that we have with the current protocol is the lack of change notification. When a user goes to a web page, they have an expectation that what they see is a snapshot at a particular point in time: it won't update until they explicitly refresh it. But the Mugshot client shows information "ambiently" to the user without active intervention on their part. In that case, the expectation changes to one of continually up-to-date information. If the user is talking to a friend in a chat room, and the friend mentions that they just changed their Mugshot headshot, then the user expects to see the change. If someone adds you as a friend, you should now be able to see their email address in a list of contacts without having to restart the Mugshot client. Currently, updates happen only if the relevant view object is received again from the server for some other reason, or if we've added a special-case notification message for that particular update to the protocol.
We can extract some general principles and concepts from the above:
Reading is very different from writing. Almost all the complications mentioned above have only to do with presenting data to the user. Write operations are simple and straightforward manipulations of the database objects, like "change an email address" or "add this user as a friend". They also are vastly less common than read operations. Write operations may need to invalidate caches or generate change notifications, but the caches and change notifications themselves are there to facilitate the read part of the system.
Objects look different to different users. When two users look at the same object, access control restrictions can cause it to look different for the two users. Any system we design has to be aware of that; we couldn't, for example, just cache prebuilt view objects. Conversely, if the system knows that a view object does look the same to all users, then significant efficiencies can be achieved.
The data model is big and diverse. The Mugshot server already has some 30 different types of view objects. Many of these objects have dozens of different properties. When we update the data model and add new types of objects or new properties to existing objects, we can't afford to have to update code in many different parts of the server. Adding new properties or object types definitely shouldn't require changes in the caching and transport layers or in the client/server protocol.
Change notification should be almost universal. The client can display all sorts of information to the user, in ways we can't predict in advance when creating the server. Having generic change notification mechanisms allows correct display of information on the client without constantly adding new sorts of ad-hoc notifications. If we make change notification implicit in the protocol stream, we do even better: the server knows which data the client has requested; the client probably wants future updates on that same information. The caveat here is efficiency: some data might be too expensive to actively notify on; some notifications might be better done with special protocols rather than generically.
Cache object properties, not objects. If individual properties of an object are computed by separate code paths from separate sources, then a cache that only caches the object in its entirety will cause significant inefficiency if we only need some of the properties. (Alternatively, we could break the object model down into fine-grained objects to the point where no object contains more than one separately-computed property. That, however, leads to an artificial organization into objects.)
Cache at a high level for efficiency. The data that we present to the user is significantly different from the raw data we store in our database. If we cache only at the low level, then we have to repeat a complex assembly process each time we view an object. For example, a MusicPersonBlockView contains information about the last 4 songs played by that person. To build it, we query the database for the recent songs, then for each song, do more database queries against our web services caches (or go to the web services directly) to find out play links and album art images. If 10 friends receive the notification block, we create a new view and repeat the entire process each of the 10 times. By caching the properties of the MusicPersonBlockView in pre-computed form, we'd radically reduce the amount of work.
Trying to achieve all the principles we've laid out in one giant step would very likely be an effort doomed to failure. Still, from the above discussion we can extract places to start and an overall direction. We want a client/server protocol that supports change notification on object properties in a generic fashion. The server infrastructure needs to, at minimum, track what properties clients are listening to; caching can later be added to avoid recomputing data many times when sending it over the wire via XMPP. In the longer term, that caching system can be used when creating view objects for use in JSP pages as well, and should eventually replace the current LiveState system.
This is the second of a three part series on reworking the Mugshot data model, looking at both the server and how the server communicates with clients. In the first part, I examined how things are set up currently, extracted some general principles of operation, and described goals to achieve in the rework. In this part, I'll go on to describe the underlying concepts of a new data model, and propose a client/server protocol. The third part will cover implementation of the new model within the server.
It should be noted that while the data model and protocol are described below in quite some detail, this is still a proposal and a draft and does not correspond to currently working code.
The central concept of the data model is a resource. Each resource is identified by a unique URI, for example http://mugshot.org/o/user/61m76k3hGbRRFS (that's me). We'll usually use URLs for resources, but there is no implication about being able to retrieve any particular content from the URL. Resources belong to resource classes, also identified by an URI like http://mugshot.org/p/o/user.
Resources have properties; similar to resources, properties belong to classes uniquely identified by an URI. Property class URIs should have a fragment identifier that provides a short name for the property, and conventionally properties defined along with the class have the form <class id>#<name>, for example http://mugshot.org/p/o/user#email. Other than that the URIs are once again uninterpreted.
The value of a property can be a string, an URI, a number, or a reference to another resource. Each property class is considered to have a fixed type. In the case where the property value is another resource, the class of that resource may be fixed to a single class, but property classes that point to resources of multiple types are also possible. More complex property values could be allowed in the future, perhaps described by XML schemas, but for now, we'll disallow them.
A resource can have multiple property values of a particular property class; in that case, there is no ordering defined between the property values. Whether properties of a property class are allowed to have multiple values is defined with the property class. A property class can allow 0 or 1 values, exactly 1 property value, or any number of property values. (In other words, properties can be optional, mandatory, or set-valued.)
There is one other thing that is defined per property class: whether that property should be included by default when fetching properties of a resource. It generally only makes sense for properties defined along with the class to be default-fetched.
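Putting the per-property-class rules above together, the metadata for a property class might look something like the following sketch. The PropertyClass name and its fields are hypothetical; only the rules they encode (value type, optional/mandatory/set-valued cardinality, default-fetch flag, and the fragment identifier as short name) come from the text.

```python
# Cardinality options: optional, mandatory, or set-valued.
OPTIONAL, MANDATORY, SET_VALUED = "0..1", "1", "0..n"

class PropertyClass:
    def __init__(self, uri, value_type, cardinality, default_fetch=False):
        self.uri = uri                   # e.g. http://mugshot.org/p/o/user#email
        self.value_type = value_type     # 'string', 'uri', 'number', or 'resource'
        self.cardinality = cardinality
        self.default_fetch = default_fetch

    @property
    def short_name(self):
        # The fragment identifier provides the property's short name.
        return self.uri.rsplit("#", 1)[1]

# The email property of a user: a string, any number of values,
# fetched by default (the flags here are illustrative).
email = PropertyClass("http://mugshot.org/p/o/user#email",
                      "string", SET_VALUED, default_fetch=True)
```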
All resources have two special properties: mugs:resourceId, whose value is the URI identifying the resource, and mugs:classId, whose value is the URI of the resource's class.
The above is obviously closely related to RDF. The biggest difference is that we aren't interested in multiple sources making non-authoritative, possibly conflicting statements about the value of a property; we consider there to be a definitive value or values for each pair of resource and property. The Mugshot data model for a resource can be easily mapped onto a set of RDF triples, but the data model is much more restricted (and thus hopefully easier to manage) than the general RDF model.
Similar to what is done in OWL, the Mugshot data model XML representation equates a URL with a fragment identifier like http://mugshot.org/p/o/user#email with the namespaced XML element <email xmlns="http://mugshot.org/p/o/user"/>.
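The mapping is mechanical: everything before the '#' becomes the XML namespace, and the fragment becomes the element name. A minimal sketch (the function names here are just for illustration):

```python
def property_uri_to_element(uri):
    """Split a property URI at its fragment: (namespace, element name)."""
    namespace, _, local_name = uri.rpartition("#")
    return namespace, local_name

def element_to_property_uri(namespace, local_name):
    """The inverse mapping, from a namespaced element back to a property URI."""
    return "%s#%s" % (namespace, local_name)
```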
In all the following examples, assume there is an enclosing element with the namespace declaration xmlns:mugs="http://mugshot.org/p/system".
The basic representation of a resource with properties looks like:
<mugs:resource xmlns="http://mugshot.org/p/o/user">
  <mugs:resourceId>http://mugshot.org/o/user/61m76k3hGbRRFS</mugs:resourceId>
  <mugs:classId>http://mugshot.org/p/o/user</mugs:classId>
  <name>Owen Taylor</name>
  <email>otaylor@fishsoup.net</email>
  <email>otaylor@redhat.com</email>
  <groupMembership mugs:resource="http://mugshot.org/o/group/KyMn6M0l11Fnsr"/>
</mugs:resource>
Using an attribute for properties of resource type allows special processing by intermediate layers that don't necessarily know the types of the properties in advance.
We allow a couple of abbreviations. First, the predefined mugs:resourceId and mugs:classId properties are allowed as attributes of the resource element, rather than as children. (All other property values must be child elements.) Second, we recognize the element <resource xmlns="http://example.com/a/b"/> as equivalent to <mugs:resource mugs:classId="http://example.com/a/b"/>. (This means that calling a property 'resource' is, at the least, confusing.) These abbreviations allow the above to be written as:
<resource xmlns="http://mugshot.org/p/o/user"
          mugs:resourceId="http://mugshot.org/o/user/61m76k3hGbRRFS">
  <name>Owen Taylor</name>
  <email>otaylor@fishsoup.net</email>
  <email>otaylor@redhat.com</email>
  <groupMembership mugs:resource="http://mugshot.org/o/group/KyMn6M0l11Fnsr"/>
</resource>
There are only two basic operations that the client makes:
QUERY: Retrieve a list of resources from the server. The fetch string determines which properties of the resources are retrieved. The operation has no side effects (other than possibly establishing change notification for property values), and thus the results can be cached under certain circumstances.
The operation name is an URI with a fragment identifier. Note that the operation name defines an operation, not who is handling it, so the operation http://mugshot.org/p/applications#popularApplications would keep the same name even if we were actually invoking it on a debug server instance, debug.mugshot.org, rather than on the production server mugshot.org.
Query parameter names are non-namespaced identifiers. Query parameter values are strings. A particular query might be documented to restrict the type of a query parameter to be a URL, an integer, a resource ID, or similar.
The returned result consists of a set of resource IDs and a (possibly empty) set of property IDs and values for each resource ID. The result may contain both the resources that make up the result and other resources referenced by properties of those resources. The two sets of resources are distinguished in the protocol.
If the server knows that the client has previously received a property value and selected for change notification on it, it doesn't need to send it again. This means that the client is required to store any property values that it has selected for change notification.
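The bookkeeping this implies on the server side can be sketched as a per-connection table of values already sent, used to filter outgoing results. This is an assumed design, not the real implementation; the class and method names are hypothetical.

```python
class ClientConnection:
    """Tracks which property values a client already holds and is
    subscribed to, so the server can avoid resending them."""

    def __init__(self):
        self._known = {}   # (resource_id, property_id) -> last value sent

    def filter_outgoing(self, resource_id, properties):
        """Return only the property values the client doesn't already have,
        and record what we send as the client's new known state."""
        out = {}
        for prop_id, value in properties.items():
            if self._known.get((resource_id, prop_id)) != value:
                self._known[(resource_id, prop_id)] = value
                out[prop_id] = value
        return out
```

The flip side, as the text notes, is that the client must keep every value it has selected for change notification, since the server will assume it still has them.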
UPDATE: Make a change on the server; there is no result. Operation and parameter names are as for queries.
There is also one notification sent from the server to the client:
Resources that were previously retrieved have changed.
Similar to a query result, the payload of the notification is a set of resource IDs and a (possibly empty) set of property IDs and values for each resource ID. The payload may contain resources whose properties have changed and other resources that haven't changed but are referenced by newly added property values. As with a query result, the server doesn't need to send any property values for which it knows the client has an up-to-date value.
Notifications can represent the addition to or deletion from a set of property values already retrieved, or can replace the existing property values entirely.
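Client-side, applying a notification to a stored multi-valued property then comes down to three modes. In the examples later in this post the mode appears as the mugs:notifyType attribute, with 'replace' the default and 'add' shown explicitly; the name 'remove' for the deletion case is my assumption. A sketch:

```python
def apply_notification(values, notify_type, new_values):
    """Apply one notification to the stored set of values for a
    multi-valued property and return the updated set."""
    values = set(values)
    if notify_type == "add":
        return values | set(new_values)
    if notify_type == "remove":        # name assumed; the text only says "deletion"
        return values - set(new_values)
    if notify_type == "replace":       # the default mode
        return set(new_values)
    raise ValueError("unknown notifyType: %r" % notify_type)
```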
There is no notification on the set of resources fetched by a query; however, property values of the returned resources are notified, unless specifically suppressed by the notify=false attribute in a fetch string (see below).
A fetched property is selected for notification even if there is no property of that type currently present on the resource.
At notification time, whether to recursively send a resource referenced by a notified property and what properties of the resource to send are determined by looking at the portion of the fetch string that applied to that property in the original query. If multiple queries have been made that retrieved the same property with different fetch strings, then what is sent is the union of what would be sent based on the individual fetch strings.
The properties to fetch with a QUERY are defined by a fetch string. Some examples for a 'user' resource: '+' (the default-fetched properties), 'name' (just the name property), 'name;email' (the name and email properties), or 'externalAccount[+]' (the externalAccount property, plus the default-fetched properties of each referenced account resource).
The BNF for fetch strings is roughly speaking:
FETCH_STRING   := PROPERTY_FETCH ( ';' PROPERTY_FETCH )* | <empty>
PROPERTY_FETCH := PROPERTY_SPEC ( '[' FETCH_STRING ']' )?
PROPERTY_SPEC  := PROPERTY ( '(' ATTRIBUTES ')' )?
PROPERTY       := '+' | '*' | PROPERTY_NAME | PROPERTY_URL
ATTRIBUTES     := ATTRIBUTE ( ',' ATTRIBUTE )* | <empty>
ATTRIBUTE      := ATTRIBUTE_NAME '=' VALUE
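To illustrate how the BNF decomposes, here is a rough recursive-descent parser for it. This is a sketch, not a proposed implementation: property tokens are read naively, and a real parser would need care with property URLs containing reserved characters. The property names in the tests ('name', 'photoUrl', 'url') and the notify=false attribute come from examples elsewhere in this post.

```python
def parse_fetch(s):
    """Parse a fetch string into a list of
    (property, {attribute: value}, children) triples."""
    result, pos = _parse_fetch_string(s, 0)
    if pos != len(s):
        raise ValueError("trailing garbage at position %d" % pos)
    return result

def _parse_fetch_string(s, pos):
    # FETCH_STRING := PROPERTY_FETCH ( ';' PROPERTY_FETCH )* | <empty>
    fetches = []
    while pos < len(s) and s[pos] != "]":
        fetch, pos = _parse_property_fetch(s, pos)
        fetches.append(fetch)
        if pos < len(s) and s[pos] == ";":
            pos += 1
    return fetches, pos

def _parse_property_fetch(s, pos):
    # PROPERTY_SPEC: the property token, up to a reserved character
    start = pos
    while pos < len(s) and s[pos] not in ";[]()":
        pos += 1
    prop = s[start:pos]

    # Optional '(' ATTRIBUTES ')'
    attrs = {}
    if pos < len(s) and s[pos] == "(":
        end = s.index(")", pos)
        for attr in filter(None, s[pos + 1:end].split(",")):
            name, _, value = attr.partition("=")
            attrs[name] = value
        pos = end + 1

    # Optional '[' FETCH_STRING ']' for referenced resources
    children = []
    if pos < len(s) and s[pos] == "[":
        children, pos = _parse_fetch_string(s, pos + 1)
        pos += 1   # skip the closing ']'
    return (prop, attrs, children), pos
```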
The system defines one query, http://mugshot.org/p/system#getResource, which returns a single resource with the ID specified by the resourceId parameter.
Rather than defining the XML mapping formally, I'll do it by example. A simple query with the query name http://mugshot.org/p/applications#popularApplications looks like:
<popularApplications xmlns="http://mugshot.org/p/applications" mugs:fetch="+">
  <mugs:param name="maxResults">20</mugs:param>
</popularApplications>
The result of this query looks like:
<popularApplications xmlns="http://mugshot.org/p/applications">
  <resource xmlns="http://mugshot.org/p/o/application"
            mugs:resourceId="http://mugshot.org/o/application/mozilla-firefox"
            mugs:fetch="*">
    <name>Firefox</name>
    <genericName>Web Browser</genericName>
    <tooltip>Browse the Web</tooltip>
  </resource>
  [...]
</popularApplications>
Note that the mugs:fetch attribute was both provided with the query and present on each resource returned with the reply. In the reply it indicates which properties of the resources were actually fetched. A value of '+' is not allowed in the reply, but a value of '*', which is not allowed in the request, can be used. '*' means that all properties of the resource were fetched and will be subsequently notified. This allows the client to optimize and not have to ask the server about other properties if they are subsequently needed.
A notification resulting from the above might look like:
<mugs:notify>
  <resource xmlns="http://mugshot.org/p/o/application"
            mugs:resourceId="http://mugshot.org/o/application/mozilla-firefox">
    <name mugs:notifyType="replace">Mozilla Firefox</name>
  </resource>
</mugs:notify>
mugs:notifyType="replace" is actually the default and could have been omitted.
A slightly more complicated example of a query and a response, using the built-in mugs:getResource query:
<mugs:getResource mugs:fetch="externalAccount +">
  <mugs:param name="resourceId">http://mugshot.org/o/user/ABCXYZ12345</mugs:param>
</mugs:getResource>

<mugs:getResource>
  <resource xmlns="http://mugshot.org/p/o/externalAccount"
            mugs:resourceId="http://mugshot.org/o/externalAccount/ABCXYZ12345.FLICKR"
            mugs:indirect="true"
            mugs:fetch="*">
    <service>FLICKR</service>
    <sentiment>LOVE</sentiment>
    <url>http://www.flickr.com/photos/26929211@N00</url>
    <photoUrl>http://www.flickr.com/favicon.ico</photoUrl>
  </resource>
  <resource xmlns="http://mugshot.org/p/o/user"
            mugs:resourceId="http://mugshot.org/o/user/ABCXYZ12345"
            mugs:fetch="externalAccount">
    <externalAccount mugs:resource="http://mugshot.org/o/externalAccount/ABCXYZ12345.FLICKR"/>
    <externalAccount mugs:resource="http://mugshot.org/o/externalAccount/ABCXYZ12345.YOUTUBE"/>
  </resource>
</mugs:getResource>
The mugs:indirect attribute indicates that the resource is not directly part of the reply but instead is referenced by a property of one of the resources of the response.
A couple of notifications that might result from the above:
<mugs:notify>
  <resource xmlns="http://mugshot.org/p/o/externalAccount"
            mugs:resourceId="http://mugshot.org/o/externalAccount/ABCXYZ12345.YOUTUBE"
            mugs:indirect="true"
            mugs:fetch="*">
    <service>YOUTUBE</service>
    <sentiment>HATE</sentiment>
    <quip>Grainy video of people doing stupid things? Why?</quip>
    <photoUrl>http://www.youtube.com/favicon.ico</photoUrl>
  </resource>
  <resource xmlns="http://mugshot.org/p/o/user"
            mugs:resourceId="http://mugshot.org/o/user/ABCXYZ12345">
    <externalAccount mugs:notifyType="add"
                     mugs:resource="http://mugshot.org/o/externalAccount/ABCXYZ12345.YOUTUBE"/>
  </resource>
</mugs:notify>

<mugs:notify>
  <resource xmlns="http://mugshot.org/p/o/externalAccount"
            mugs:resourceId="http://mugshot.org/o/externalAccount/ABCXYZ12345.YOUTUBE">
    <sentiment>LOVE</sentiment>
    <url>http://www.youtube.com/user/clarkbw</url>
  </resource>
</mugs:notify>
Embedding the protocol into XMPP is straightforward:
The XML forms of the request and reply appear as child elements of the <iq/> element. The XML form of notification occurs as a child of the <x/> extension element for the headline message.
For writing desktop applications on the client side, we need not just the protocol, but APIs built on top of it. If we can keep the client API sufficiently similar to the protocol, then we can help make sure that what clients do can be expressed in efficient terms within the protocol. (Even if there are intermediate proxies and caching layers in between.)
In python we might have:
popularApplications = ddm.query("http://mugshot.org/p/applications#popularApplications",
                                fetch="name", maxResults=10)
for app in popularApplications:
    print app.name
In C, that might look like:
GSList *results;

results = ddm_query("http://mugshot.org/p/applications#popularApplications",
                    "name",
                    "maxResults", DDM_INTEGER, 10,
                    NULL);

for (GSList *l = results; l; l = l->next) {
    DDMResource *o = l->data;
    const char *name;

    ddm_resource_get(o, "name", DDM_STRING, &name, NULL);
    g_print("Name is %s\n", name);
}

ddm_free_results(results);
Notification doesn't have to be complex. It could be as simple as:
user = ddm.getResource("http://mugshot.org/o/user/ABCXYZ12345", fetch="name")
user.onNotify("name", updateUserName)
One of the trickier aspects of the protocol, and one not addressed above, is keeping the set of notifications that the server is maintaining on behalf of the client from growing without bound if a client stays connected for an extended period of time.
One simple mechanism would be to have the server simply discard old notification selections and send a message saying that it has done so. This message would be propagated much like a notification itself. If it got all the way down to an application and the application still needed the data, it would ask for it again.
That could be inefficient for big properties (like the set of all friends of a user), so a refinement would be to give advance warning: the server first sends a 'tryExpire' message, and the application can respond with a 'keepAlive'. If no application responds, then we proceed as above.
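The tryExpire/keepAlive handshake sketched above could look roughly like this. Everything here is hypothetical (the message names come from the text, but the class and its API are invented for illustration): a subscription survives if any interested application answers the tryExpire, and is otherwise dropped, forcing a fresh query if the data is needed again.

```python
class SubscriptionTable:
    """Server-side record of (resource, property) pairs a client is notified on."""

    def __init__(self):
        self._subscriptions = set()   # (resource_id, property_id) pairs

    def subscribe(self, resource_id, property_id):
        self._subscriptions.add((resource_id, property_id))

    def try_expire(self, resource_id, property_id, applications):
        """Offer to expire a subscription; keep it if any application objects."""
        key = (resource_id, property_id)
        if any(app.wants(key) for app in applications):
            return False              # a keepAlive arrived; subscription survives
        self._subscriptions.discard(key)
        return True                   # expired; a client needing it must re-query

    def is_subscribed(self, resource_id, property_id):
        return (resource_id, property_id) in self._subscriptions
```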
This is the last of a three part series on reworking the Mugshot data model. In the first part, I examined how things are set up currently, extracted some general principles of operation, and described goals to achieve in the rework. The second part described the underlying concepts of a new data model, and proposed a client/server protocol. This part concludes by covering an API for the new model within the server, and how we might implement it.
As with the protocol, while the server implementation of the data model is described in a lot of detail below, it does not correspond to running code. Things may change greatly before the running code exists.
The server implementation has three primary functions. First, it takes care of the server side of the data model. It handles serialization of resources and properties to XML form, it tracks the properties that each client is "subscribed to" for change notification, and it handles sending out change notifications as necessary.
Second, the implementation is responsible for handling computation of resource property values and for applying access controls to restrict visibility of property values to viewers who are allowed to see them. This computation is used to feed the client/server protocol, but the goal is to use the same setup to produce objects that can be used in the JSP code in place of the current "View" objects.
Finally, the implementation acts as a cache of property values at a high level, to avoid continually recomputing them from scratch or from low level caches.
There are a lot of different aspects needed for the server side representation of a resource class. We need code to query the database and web service caches for the properties of the object. We need an object or interface to use from the web pages. We need a place to store the object's properties when caching.
If we used separate session beans, objects, and interfaces to handle all these aspects, we'd be creating a maintenance headache. A single property addition would require updates in four or five places. The approach we take instead is to centralize as much as possible into a single class, the "Data Model Object" (DMO), which contains the logic for looking up and computing property values. Annotations and byte-code generation are used to take the DMO class and add the caching and filtering we need to use the data model object as a view onto the resource. Annotations also provide the information we need to manage the client/server data-model protocol.
A very simple DMO with only a single property would look like:
@DMO(classId="http://mugshot.org/p/o/group", resourceBase="/o/group")
public abstract class GroupDMO implements DMObject<Guid> {
    Guid key;
    Group group;   // EJB3 persistence object, filled in by the init method

    @EJB GroupSystem groupSystem;

    protected GroupDMO(Guid key) {
        this.key = key;
    }

    // Key that uniquely identifies this resource among resources of this class.
    // This must either be a GUID or a class implementing DMKey
    public Guid getKey() {
        return key;
    }

    // Called before any @Property methods
    protected void init() throws NotFoundException {
        group = groupSystem.lookupGroupById(key);
    }

    @Property
    public String getName() {
        return group.name;
    }
}
The resource ID is formed from the server's URL, the resourceBase, and the key; for example, a group on debug.mugshot.org with the GUID 'KyMn6M0l11Fnsr' would have the resource ID 'http://debug.mugshot.org/o/group/KyMn6M0l11Fnsr'.
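That construction rule is simple enough to state as a one-line helper (a sketch; the function name is invented):

```python
def make_resource_id(server_url, resource_base, key):
    """Join the server's URL, the DMO's resourceBase, and the key."""
    return "%s%s/%s" % (server_url, resource_base, key)
```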
Objects of the DMO type are never created directly. Instead, at run time, a proxy class is derived by byte code generation, that intercepts the property-fetching methods and adds caching and filtering.
The generated code for getName(), if written out in Java, would look something like:
private String name;

public String getName() {
    // Look first in a local cache stored on the object
    if (name != null)
        return name;

    // Now look for a cached value in the system-wide cache
    name = (String)session.getCachedValue(GroupDMO.class, getKey(), "name");
    if (name != null)
        return name;

    // Finally, initialize the DMO and call the computation method there
    if (!initialized) {
        doInjections();
        init();
        initialized = true;
    }

    name = super.getName();
    return name;
}
Note that if every accessed property is already cached in the property cache, then a GroupDMO need never be initialized.
Data model objects are used within sessions. Like a Hibernate session, a data model session is scoped to the current transaction. The session object provides a central location for methods related to the data model. For example, to get a GroupDMO for a group ID, we call:
ReadOnlySession.getCurrent().find(GroupDMO.class, groupId);
The data model session goes beyond the Hibernate model by having a fixed viewpoint associated with the session. Data model sessions are also either read-only or read-write. Read-only sessions are used for IQ's of type 'get', in JSP's, and so forth. They read from the property cache and cache new values there. It's an error to modify the database from within a read-only session.
Read-write sessions are used when modifying the database; they do not read from the property cache or cache new values there, though they may invalidate values in the property cache on commit.
The data model session is initialized explicitly when starting the transaction, and the viewpoint and type of the session are fixed at that time.
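The session rules described above can be condensed into a sketch. This is an illustration of the semantics in Python rather than the proposed Java implementation, and the names are hypothetical: only read-only sessions consult and populate the property cache; read-write sessions always recompute, and apply their invalidations on commit.

```python
class DataModelSession:
    def __init__(self, viewpoint, read_only, property_cache):
        self.viewpoint = viewpoint           # fixed for the session's lifetime
        self.read_only = read_only           # fixed at initialization time
        self._cache = property_cache         # (class, key, prop) -> value
        self._invalidations = []

    def get_property(self, cls, key, prop, compute):
        if self.read_only:
            # Read-only: serve from the property cache, caching new values.
            cache_key = (cls, key, prop)
            if cache_key not in self._cache:
                self._cache[cache_key] = compute()
            return self._cache[cache_key]
        # Read-write: never consult or populate the cache.
        return compute()

    def invalidate_on_commit(self, cls, key, prop):
        assert not self.read_only
        self._invalidations.append((cls, key, prop))

    def commit(self):
        # Invalidations are applied only when the transaction commits.
        for cache_key in self._invalidations:
            self._cache.pop(cache_key, None)
```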
The idea that different users have a different view of an object is basic to our system. The simplest form of variability of a property value is to apply an access control rule to it. This is done by using the @Filter annotation, which can be applied either to a class or to a method.
Applied to a class, it restricts who can look up resources of that class. For example, to represent the fact that GroupDMO objects for private groups should only be visible to members of that group, we can add an @Filter annotation as follows:
@DMO(classId="http://mugshot.org/p/o/group", resourceBase="/o/group")
@Filter("viewer.canSeeGroup(key)")
public abstract class GroupDMO implements DMObject<Guid> {
    [...]
Applied to a single-valued property, the @Filter annotation determines whether the viewer sees a value for that property or not. For example, if we want to restrict the email property of a UserDMO to people who have listed the user as a friend, we'd use:
@Filter("viewer.isContactOf(key) || viewer.is(key)")
public String getEmail() { ... }
We can also filter out a subset of the values of a multiple-valued property:
@Filter("viewer.receivesGroupPosts(item)")
public List<GroupDMO> getGroupRecipients() { ... }
(This doesn't properly reflect the visibility rules for the recipients of a post in the Mugshot system; it's just here as an example.) Note that any filtering on the property is combined with the filtering specified for the class of the returned objects, so it isn't necessary to say:
@Filter("viewer.receivesGroupPosts(item) && viewer.canSeeGroup(item)")
public List<GroupDMO> getGroupRecipients() { ... }
It's also possible to uniformly filter all values of a multiple-valued property based on whether a predicate is true for any value in the list:
@Filter("viewer.is(any) || viewer.isSystem()")
public List<PersonDMO> getPersonRecipients() { ... }
The syntax of filter strings is pretty simple. Each term consists of:
viewer.<predicate>(<argument>)
And terms can be combined with '||', '&&', '!', and parentheses in the standard way. <predicate> is one of a fixed set understood by the system, and <argument> identifies what the predicate is applied to: key for the key of the resource being filtered, item for an individual value of a multiple-valued property, or any to test whether the predicate holds for any value in the list, as in the examples above.
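As a sketch of how this syntax could be evaluated, here is a small recursive-descent evaluator. It is illustrative only: a term is simply looked up in a set of terms assumed to be true, whereas the real system would compile filters once and bind each predicate to Java code:

```java
import java.util.Set;

// Sketch of evaluating a filter string like
// "viewer.isContactOf(key) || viewer.is(key)" against a set of terms
// that are known to be true for the current viewer.
class FilterEval {
    private final String s;
    private int pos;
    private final Set<String> trueTerms;

    private FilterEval(String s, Set<String> trueTerms) {
        this.s = s;
        this.trueTerms = trueTerms;
    }

    public static boolean evaluate(String filter, Set<String> trueTerms) {
        FilterEval e = new FilterEval(filter, trueTerms);
        boolean result = e.parseOr();
        e.skipSpace();
        if (e.pos != filter.length())
            throw new IllegalArgumentException("Trailing garbage in filter: " + filter);
        return result;
    }

    private boolean parseOr() {
        boolean v = parseAnd();
        while (match("||"))
            v = parseAnd() || v; // always parse the right side to advance
        return v;
    }

    private boolean parseAnd() {
        boolean v = parseUnary();
        while (match("&&"))
            v = parseUnary() && v;
        return v;
    }

    private boolean parseUnary() {
        if (match("!"))
            return !parseUnary();
        if (match("(")) {
            boolean v = parseOr();
            if (!match(")"))
                throw new IllegalArgumentException("Missing ')' in filter");
            return v;
        }
        // A term looks like viewer.<predicate>(<argument>)
        skipSpace();
        int start = pos;
        while (pos < s.length() && s.charAt(pos) != ')')
            pos++;
        if (pos == s.length())
            throw new IllegalArgumentException("Unterminated term in filter");
        pos++; // consume the ')'
        return trueTerms.contains(s.substring(start, pos));
    }

    private void skipSpace() {
        while (pos < s.length() && Character.isWhitespace(s.charAt(pos)))
            pos++;
    }

    private boolean match(String token) {
        skipSpace();
        if (s.startsWith(token, pos)) {
            pos += token.length();
            return true;
        }
        return false;
    }
}
```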
It's also possible to have properties that are actually different for different viewers, instead of just being present or not:
@Property
@ViewerDependent("viewer.isMember(key)")
public GroupMemberDMO getMember() { ... }
The (optional) filter string indicates that the property should be notified for a particular user if the given filter changes from true to false or vice versa. There are some efficiency downsides to viewer-dependent properties: for one thing, they cannot be stored in the global property cache, and thus the DMO always has to be initialized before the property value is retrieved. In addition, other than for changes indicated by the filter string, there is no way to notify a change for just one user's view of the resource: a notification has to be sent to all subscribers for the resource/property pair even if the value doesn't change for most of them.
It's possible for DMO's to have getters that are just convenience for JSP pages instead of independent resource properties. Such getters should be written in terms of the resource properties so that initialization and caching are properly handled:
public MembershipStatus getStatus() {
    GroupMemberDMO member = getMember();
    if (member != null)
        return member.getStatus();
    else
        return MembershipStatus.NONMEMBER;
}
Code that makes changes to database objects in ways that affect data model objects triggers a notification using a method on the ReadWriteSession object:
session.notify(GroupDMO.class, groupGuid, "name");
This will look up all listeners for the name property of the given resource and send out notifications.
Notification can also occur because of database changes that affect filtering. For example, when we add a contact B to user A, we look for property subscriptions where the viewer is user A and the watched property has the filter viewer.isContactOf(<guid B>), and trigger notifications for all such properties. (The same mechanism is used for handling the filter argument to @ViewerDependent.)
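A toy sketch of that lookup follows. All names here are invented for illustration, and a real implementation would index subscriptions by viewer and filter term rather than scanning a list:

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of filter-driven notification: when a database change flips a
// realized filter term (say, B becomes a contact of A), we scan A's
// subscriptions for properties whose filter references that term.
class FilterNotifier {
    public static class Subscription {
        public final String viewer;    // viewer's ID, e.g. "61m76k3hGbRRFS"
        public final String resource;  // resource ID the client is watching
        public final String property;  // property name, e.g. "email"
        public final String filter;    // filter string for the watched property

        public Subscription(String viewer, String resource,
                            String property, String filter) {
            this.viewer = viewer;
            this.resource = resource;
            this.property = property;
            this.filter = filter;
        }
    }

    private final List<Subscription> subscriptions = new ArrayList<Subscription>();

    public void subscribe(Subscription s) {
        subscriptions.add(s);
    }

    // A realized term looks like "viewer.isContactOf(KyMn6M0l11Fnsr)".
    // Return the subscriptions that must be (re)notified for this viewer.
    public List<Subscription> affectedBy(String viewer, String realizedTerm) {
        List<Subscription> affected = new ArrayList<Subscription>();
        for (Subscription s : subscriptions) {
            if (s.viewer.equals(viewer) && s.filter.contains(realizedTerm))
                affected.add(s);
        }
        return affected;
    }
}
```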
GroupDMO uses a GUID as its key. But not all types of resources are keyed by a single GUID. For example, a GroupMember resource is keyed by both a GUID for the group and a GUID for the member resource. For such cases, we can define a custom key class. A custom key class must define transformations to and from strings, along with hashCode() and equals().
public class GroupMemberKey implements DMKey {
    private Guid groupId;
    private Guid memberId;

    public GroupMemberKey(Guid groupId, Guid memberId) {
        this.groupId = groupId;
        this.memberId = memberId;
    }

    public GroupMemberKey(String string) throws BadKeyException {
        String[] components = string.split("\\.");
        if (components.length != 2)
            throw new BadKeyException("GroupMember key should have two components");
        try {
            groupId = new Guid(components[0]);
            memberId = new Guid(components[1]);
        } catch (ParseException e) {
            throw new BadKeyException("Bad GUID in GroupMember key");
        }
    }

    // Convenience constructor
    public GroupMemberKey(GroupMember member) {
        this.groupId = member.getGroup().getGuid();
        this.memberId = member.getMember().getGuid();
    }

    public Guid getGroupId() {
        return groupId;
    }

    public Guid getMemberId() {
        return memberId;
    }

    @Override
    public String toString() {
        return groupId.toString() + "." + memberId.toString();
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof GroupMemberKey))
            return false;
        GroupMemberKey other = (GroupMemberKey)o;
        return groupId.equals(other.groupId) && memberId.equals(other.memberId);
    }

    @Override
    public int hashCode() {
        return groupId.hashCode() * 13 + memberId.hashCode() * 17;
    }
}
Usage of GroupMemberKey in code to look up GroupMemberDMO objects looks like:
@Property
public List<GroupMemberDMO> getMembers() {
    List<GroupMemberDMO> result = new ArrayList<GroupMemberDMO>();
    for (GroupMember groupMember : group.getMembers()) {
        result.add(session.find(GroupMemberDMO.class, new GroupMemberKey(groupMember)));
    }
    return result;
}
The init() function for the GroupMemberDMO object would look like:
private GroupMember member;

protected void init() throws NotFoundException {
    Group group = groupSystem.lookupGroupById(key.getGroupId());
    member = groupSystem.getGroupMember(group, key.getMemberId());
}
If you study the above code carefully, you may see an efficiency trap, at least if you are familiar with the internal workings of the Mugshot server. For each GroupMember object we extract the group and member IDs and store them in the key, then later, when we initialize the GroupMemberDMO object, we call groupSystem.getGroupMember() to get the GroupMember object back. But groupSystem.getGroupMember(group, resource) is quite inefficient since it has to linearly search the list of group members, making the overall operation O(n^2) in the number of members.
We can get around this with a little modification to our GroupMemberKey class, changing the convenience constructor GroupMemberKey(GroupMember member) to cache the value passed in. We override clone() to null out this member, to avoid storing (detached) persistence objects in the property cache.
public class GroupMemberKey implements DMKey {
    private Guid groupId;
    private Guid memberId;
    private GroupMember member;

    [...]

    public GroupMemberKey(GroupMember member) {
        this.groupId = member.getGroup().getGuid();
        this.memberId = member.getMember().getGuid();
        this.member = member;
    }

    @Override
    public Object clone() {
        try {
            GroupMemberKey c = (GroupMemberKey)super.clone();
            c.member = null;
            return c;
        } catch (CloneNotSupportedException e) {
            throw new RuntimeException(e); // key classes are always cloneable
        }
    }

    public GroupMember getMember() {
        return member;
    }

    [...]
}
Then the init() function for GroupMemberDMO can use this cached member:
protected void init() throws NotFoundException {
    member = key.getMember();
    if (member == null) {
        Group group = groupSystem.lookupGroupById(key.getGroupId());
        member = groupSystem.getGroupMember(group, key.getMemberId());
    }
}
The property cache caches property values, not DMOs. This is necessary because the DMO itself, once initialized, has fields that are tied to a particular transaction. Caching property values also makes it much easier to handle concurrency issues: we can invalidate or update values in the property cache for a resource without worrying about what transactions might currently be accessing DMOs for that object.
If the property value is a string, an integer, or similar, it is stored in the cache as is. If the property value is a resource, then what is stored in the cache is not the DMO that is returned from the property getter, but rather the key for that DMO.
Efficient memory usage in the cache is important in the long term. One thing that can facilitate this is the knowledge that the set of properties for a particular DMO class is fixed and known in advance, so we can assign integer indices to each property of the class. This allows using arrays rather than hash tables to track property values. (Whether this will be more efficient will depend on the percentage of properties for an object that are typically cached.) It's important to distinguish cached-as-null from not-cached; if we use an array, this can be done by using a non-null "nil" object to represent cached-as-null.
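A sketch of such an array-based per-resource cache, with a private NIL sentinel to distinguish cached-as-null from not-cached (class and method names are illustrative):

```java
// Sketch of a per-resource property cache: one array slot per property of
// the DMO class, indexed by the property's integer index. A private NIL
// object marks "value was computed and is null", while a null slot means
// "nothing cached here".
class PropertyCacheEntry {
    private static final Object NIL = new Object(); // cached-as-null sentinel

    private final Object[] values;

    public PropertyCacheEntry(int propertyCount) {
        values = new Object[propertyCount];
    }

    public void store(int propertyIndex, Object value) {
        values[propertyIndex] = (value == null) ? NIL : value;
    }

    public boolean isCached(int propertyIndex) {
        return values[propertyIndex] != null;
    }

    // Only meaningful if isCached() returned true
    public Object get(int propertyIndex) {
        Object v = values[propertyIndex];
        return (v == NIL) ? null : v;
    }

    public void invalidate(int propertyIndex) {
        values[propertyIndex] = null;
    }
}
```

Note that a resource property would be stored here as its key rather than as a DMO, per the rule above.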
Each subscription to notification of changes for a resource corresponds logically to a "fetch string" in the protocol. If we compile these fetch strings to objects, then we can get a very space-efficient representation for subscriptions by noting that the same fetch specification will be repeated over and over again, and the compiled specification can be shared across every use, allowing the subscription to be represented by simply a pointer to the shared compiled specification.
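This sharing could look roughly like the following sketch, where identical fetch strings are interned so every subscription points at one compiled object (compilation itself is stubbed out, and all names are invented):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of interning compiled fetch specifications: each distinct fetch
// string is "compiled" once, and subscriptions share the resulting object.
class FetchSpecCache {
    public static class CompiledFetch {
        public final String source;
        CompiledFetch(String source) { this.source = source; }
    }

    private final Map<String, CompiledFetch> interned =
        new HashMap<String, CompiledFetch>();

    public synchronized CompiledFetch compile(String fetchString) {
        CompiledFetch compiled = interned.get(fetchString);
        if (compiled == null) {
            compiled = new CompiledFetch(fetchString); // real code would parse it
            interned.put(fetchString, compiled);
        }
        return compiled;
    }
}
```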
The other thing that we need to track is what properties need to be notified because of the filters applied to the properties. A space-efficient way to do this is to keep a mapping of realized filter terms (like 61m76k3hGbRRFS.canSeeGroup(KyMn6M0l11Fnsr)) to resources. When we make a change that affects the filter term, we can look up subscriptions to that resource by that viewer, examine the filter for each property that is subscribed to, and see which ones reference the filter term.
Writing down a cool-looking API as we've done above is one thing; actually getting it implemented and moving to it is something entirely different and more challenging.
Testing is clearly an important aspect of managing the complexity. It's at least worth considering whether it would make sense to implement the caching system as a library that is independent of the operation of the Mugshot server. Writing unit tests against the full Mugshot server data model is tricky, since it is highly interrelated, dependent on Hibernate, and otherwise messy. Most of the infrastructure described above should work well standalone; the only sticking point that is immediately apparent is that the filter predicates described above are Mugshot-specific. To make the caching system a standalone library, the predicate set would need to be extensible rather than built into the system.
Migrating existing code to the new system can be done resource type by resource type in an incremental fashion. When necessary, a DMO class and a View class can exist side-by-side for a resource type until the migration of all uses of the View to the DMO is complete. (In fact, the View objects probably will have to be left around in their current form until we're ready to remove the old parts of the client/server protocol, since the current protocol stream is largely written out from inside the View objects.) By far the trickiest part of migration will be porting the 12,000 lines of JSP pages and tag files to the new system, since run-time references to the view objects from the JSTL are not caught by the compiler. Our use of JSP and the JSTL is quite stylized, so it would be possible with some effort to write automatic tools to check for errors; whether this would save time in the end is an open question.
Like any significant code change, implementing a new data model for the Mugshot server has significant risks for delays and regressions. What is described here is also largely novel, and while it is strongly motivated by our 18 months of working on Mugshot so far, it may simply be unworkable for reasons that aren't readily apparent to us. If those risks can be managed, then the opportunity to put our system on a sounder footing and make a big step up in scalability and simplicity is large.