Documentum Development under vCenter Lab Manager – Lessons Learned

I promised a few “lessons learned” and here they are!  Our initial implementation worked well, but there were little gotchas.  These were not really Lab Manager specific issues, but rather issues with our implementation.  None of them were show stoppers, but they were noticeable enough that we decided they needed to be addressed.

Before I get into them, here is a quick recap of our Lab Manager configuration:

Physical ESX Server Environment

  • 4x Dell M610/R610 servers
  • 2x Intel 5570
  • 64GB RAM
  • Quad Gigabit NIC
  • QLogic 4Gb/s HBA
  • vSphere ESXi 4.1 Enterprise Plus

Lab Layout

Virtual Lab Environment

  • ad-01 – A Microsoft Windows 2003 Active Directory server (also providing DNS services; 4GB RAM)
  • client-01 – A standard Windows workstation build, laid out the way clients typically have them (1GB RAM)
  • dev-01 – A Documentum developer workstation with tools such as Composer, Process Builder, Forms Builder, DAB, Xming, PuTTY and other useful little utilities (2GB RAM)
  • oracle-01 – An Oracle 11g RDBMS (8GB RAM – 4GB SGA)
  • contentstore-01 – Documentum Content Store (docbrokers, docbase, method server w/ BPM installed, ACS and thumbnail server; 4GB RAM)
  • avts-01 – The name is a little misleading; originally intended to be an AVTS, it only acts as an MTS host (4GB RAM)
  • fti-01 – Documentum Full Text Index server (4GB RAM)
  • webfe-01 – DamTop, Documentum Administrator and TaskSpace running under Tomcat (2GB RAM)
  • pi-01 – Process Integrator running under Tomcat (2GB RAM)

As you can see, we’ve really treated our lab as a standalone environment.  To be very honest, this wasn’t the first environment we raised.  The original environment used our standard naming conventions (meant for IT Ops, not developer friendly) and was missing some key services.  When we built the second, vanilla image, here are the lessons we applied:

DNS services are critical both from a usability and functionality point of view.

  • For a developer, it’s much easier to remember “http://webfe-01:8080/da”  versus “http://10.255.0.10:8080/da”
  • Documentum needs properly functioning DNS for its lookups.  While this may be obvious, having to manage everything via hosts files and diagnose the resulting errors can be a real pain (a quick sanity check is shown below).
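As a minimal sanity check (the hostname and IP here are just the lab examples from above), it’s worth confirming from any lab VM that both forward and reverse lookups resolve consistently, since Documentum relies on both:

    # forward lookup: the hostname should resolve to the lab IP
    nslookup webfe-01
    # reverse lookup: the IP should map back to the same hostname
    nslookup 10.255.0.10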

Use hostnames that are obvious

  • Of course, it’s nice if our labs look a lot like production, but it’s not always possible (our production environment has 29 hosts involved, whereas a lab has 6 Documentum servers)
  • IT Operations uses hostnames that give them a lot of information but are not always obvious (i.e.: Is it a physical server? Is it a VM? Which site is it at? Is it a corporate asset or divisional? Cluster or not? Et cetera).  Telling a developer to “connect to http://moncorpvmdmd01:8080/da” versus “http://webfe-01:8080/da” sure can be a tough thing (and a mouthful)
  • Having different host names really has the biggest impact on the Deployment/Configuration manager, but it saves a lot of time for all the users of a lab.  Like Spock once said, “the needs of the many outweigh the needs of the few.”

Host Spanning is important!

Under vCenter Lab Manager, there is a feature called “Host Spanning”.  In our original implementation we had chosen to forgo this feature, as it required many steps: migrating from vSwitches to dvSwitches, additional VLANs from the network team, redeployment of the labs and additional configuration of the Lab Manager deployment.  Quite quickly, we found that we began creating “hotspots” inside of the ESX environment.

When this feature is not in use, Lab Manager effectively deploys a private network switch (a vSwitch, for the fenced networks we used) and boots the entire instance of the lab on a single ESX server.  Generally, this type of configuration is ideal where labs may be very network intensive and there might be sensitivity to network congestion (not our case, with 10-40Gb/s of bandwidth between our edge and core switching).  It of course defeats any benefit from vCenter DRS, as no balancing can be utilized.  If a VM somehow ends up on another ESX server (via a manual migration request, for example), that single VM becomes cut off from the lab and any network access it had (if it will even allow a vMotion).

If this feature is turned on and labs are deployed using the “Host Spanning” option, labs can have their VMs distributed across multiple hosts.  The obvious benefit is a better balance of resource utilization across the cluster, and it additionally lets DRS and vMotion have their fun.  This does involve having additional VLANs available for the private use of vSphere, shared storage (who doesn’t do this for ESX?) and some additional configuration.

For us, enabling this feature let us nearly double the number of labs we could deploy, gave us additional HA and simplified the job of IT Operations (allowing the labs to be managed like any other workloads).

There is a downside, though!  Lab Manager can be utilized with any VMware vSphere 4.1 license, but host spanning requires the use of dvSwitches, which, last I looked (and someone correct me if I am wrong), are only available under Enterprise Plus licensing.  These licenses of course come at a higher cost, but I bet your IT Operations team will appreciate them for all the additional features they provide.  (More on this in a different post)

Keep it Simple!

Sometimes we all suffer from wanting to complicate processes or technical solutions.  Originally, our environment was deployed with 4 separate workspaces (the default Main and 3 others).  After observing how our developers, deployment managers and integrators actually work, all the labs are now more or less deployed into a single workspace.  Unless you have very specific permission requirements or your team is a stickler for segregation, simply using the Main workspace, or a single additional one, often suffices.

Tuning is key!

In our production environment, our little beauty reaches ingestion rates of 12,000 items per hour while barely breaking a sweat.  (Note: mileage may vary; it all depends on your specific Documentum application.)  Obviously, in a production environment, both functionality and performance are key.  In a development environment, functionality rules as king, but developers are quick to complain if they are left waiting for too long.  Here are the spots that got us:

  • Oracle

Originally configured with 16GB of RAM, our RDBMS flew and things were quite “sexy”.  The key under VMware is the idea of deduplicated memory (transparent page sharing) and the hope that applications don’t use up all their memory.  Needless to say, Oracle can be a little bugger, as its SGA is quite hard to dedupe and it will take every byte of memory you have (if you let it, of course).  In the end, we settled on an 8GB configuration with 6GB allocated to Oracle’s use.  If you have a good DBA at your disposal, their time and knowledge could probably help get this even smaller and more performant.  (That’s our next step.)
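Capping Oracle’s appetite comes down to a handful of instance parameters.  As a minimal sketch (the values below are assumptions sized to our 8GB VM, not recommendations):

    # init.ora excerpt -- cap the SGA so the instance stays inside
    # the VM's memory allocation (values are illustrative only)
    sga_max_size=6G
    sga_target=6G
    pga_aggregate_target=1G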

  • JBoss on Content Store

Ahhh yes, JBoss/JVM tuning, always fun.  Be ready to go through a fun exercise here.  Out of the box, the JBoss in the lab was just too weak (GC settings, CPU resources and memory).  In a production environment, where the JBoss isn’t too heavily loaded, we’ll deploy DA alongside the content store’s JBoss.  We quickly found that under our heavier regression and QA loads, the JBoss was getting overwhelmed (doing full GCs, getting choked up and hitting activity timeouts).  The first order of business was moving DA out and tuning the memory settings of the JVM/JBoss.  Once this was completed, performance, while not close to production, was more than acceptable for development and QA testing.
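To give a flavour of what that tuning looks like, here’s a minimal sketch of the JVM options one might set in JBoss’s run.conf.  The heap sizes and collector choice are assumptions for a small lab VM; the right values depend entirely on your JVM version and load:

    # run.conf excerpt -- pin the heap so the JVM doesn't resize under
    # load, use a lower-pause collector, and log GC activity so
    # full-GC storms are visible (all values illustrative)
    JAVA_OPTS="-Xms1024m -Xmx1024m -XX:MaxPermSize=256m \
               -XX:+UseConcMarkSweepGC \
               -verbose:gc -Xloggc:/var/log/jboss-gc.log"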

  • Patching of the Content Store

This is an obvious one, but we found certain issues were much more prevalent in the lab than in production due to the reduced capacity.  You’ll find that certain types of issues will manifest faster.  Get patched up!  While some might say “well, this isn’t good”, we had the opposite reaction: “cool!  We can hit DCTM issues faster, which makes it easier to reproduce common bugs and issues.”

Think of Ease of Use

This gets back to the hostname stuff a little bit, but I’ll expand on it further.  Standardize the username/password pairs, try to use standard paths and start/stop scripts (under RHEL, showing a developer how to use “service docbase start/stop” will save sooo much time; I’ll post up a few proper samples later, but a rough sketch follows below), and trying to make things idiot proof will make life soooo much easier.
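As a teaser, a service wrapper along these lines is all it takes.  This is only a sketch, not the samples I’ll post later: the install path, repository name and dmadmin account are assumptions for a typical install, and Content Server generates the dm_start_/dm_shutdown_ scripts itself at install time.

    #!/bin/sh
    # /etc/init.d/docbase -- minimal wrapper so developers can run
    # "service docbase start|stop" (paths and names are illustrative)
    # chkconfig: 345 85 15
    # description: Documentum docbase
    DM_DBA=/opt/documentum/dba
    REPO=devrepo
    case "$1" in
      start)
        su - dmadmin -c "$DM_DBA/dm_start_$REPO"
        ;;
      stop)
        su - dmadmin -c "$DM_DBA/dm_shutdown_$REPO"
        ;;
      *)
        echo "Usage: service docbase {start|stop}"
        exit 1
        ;;
    esac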

Docbrokers

A standard single content store configuration often comprises a single docbroker on port 1489.  In a lab environment this will often suffice, but since Lab Manager allows you to bridge your fenced network to a public network (using a cool little ttylinux brouter), if your devs get the idea that they prefer to use the Composer installed on their desktops, you’ll run into IP issues very quickly.

In a normal configuration, the docbase contacts the docbrokers configured in its server.ini file (see the bottom of /opt/documentum/dba/<repo name>/server.ini).  When it contacts a docbroker, it announces the repository it is serving and its IP address.  In a fenced network, that will be a private IP address that a DFC client outside of the lab cannot reach.  Our workaround was to raise a second docbroker on port 1491 and, inside the docbroker.ini file for that instance, utilize a translation configuration (do a search on Powerlink; it’s pretty easy).  Simply define the public IP to private IP mapping and ensure your server.ini is set to project to this new docbroker.
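Roughly, the two pieces look like this.  The IPs below are illustrative, and you should confirm the exact [TRANSLATION] syntax (including which side of the mapping is the public address) on Powerlink before relying on it:

    # docbroker.ini for the second docbroker instance
    [DOCBROKER_CONFIGURATION]
    port=1491
    [TRANSLATION]
    # map between the lab's private IP and the routable public IP
    # (illustrative addresses; verify the ordering on Powerlink)
    host=10.255.0.20=192.168.100.20

    # server.ini excerpt -- project to the translating docbroker too
    [DOCBROKER_PROJECTION_TARGET_1]
    host=contentstore-01
    port=1491
    proximity=2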

With all of this, there are of course other items I am forgetting, so stay tuned; I’ll write everyone another novel later.  I hope you enjoyed the post and please feel free to post a comment with your own experiences, tips’n’tricks or opinions.
