Speeding up DB access using SSD or Memory

This blog will contain a collection of information on I/O tests for speeding up database access using SSD drives or RAM-based disks. Some background information is provided first.

 

Rationale, background

This set of ideas started from observing inefficiencies in production jobs as described in this ticket. The general observation was that while the DB maker spends, under normal circumstances, 2.4% of the full chain in real time for 0.6% of CPU time on physics data, it spends 75.4% of real time (for 1% of CPU time) on stream data, making those jobs extremely long to process. A summary of time spent appears below

  • Under full farm occupancy
    • stream data Ast=75.4% Cpu = 1.0%
      Ast=87.0% Cpu = 0.8%
      so let's say around ~ Ast=80%+/-5 and Cpu=0.9%+/-0.1 (ratio 400)
       
    • non-stream data
      Ast=2.4% Cpu = 0.6% (ratio 4)
       
  • Under dedicated server unloaded
    • stream data Ast=9.0% Cpu = 0.2% (ratio 45)

Investigation revealed that the cache hit rate is very poor under the stream load (a few percent), indicating that the data cached on the server is not re-usable. The sparsity of the data in streams essentially defeats the caching idea.
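The effect can be illustrated with a toy cache simulation (a hypothetical sketch, not the actual server cache): lookups concentrated on a few hot keys keep the hit rate high, while sparse stream-style lookups that rarely revisit a key drive it toward zero.

```python
from functools import lru_cache
import random

def hit_rate(keys, cache_size=128):
    """Replay a key sequence through a small LRU cache and return the hit rate."""
    @lru_cache(maxsize=cache_size)
    def lookup(key):
        return key  # stands in for an expensive DB fetch

    for k in keys:
        lookup(k)
    info = lookup.cache_info()
    return info.hits / (info.hits + info.misses)

random.seed(0)
# Physics-like load: lookups concentrated on a few hot keys -> cache works.
hot = [random.randrange(50) for _ in range(10_000)]
# Stream-like load: sparse lookups spread over a huge key space -> cache is useless.
sparse = [random.randrange(1_000_000) for _ in range(10_000)]

print(f"hot load hit rate:    {hit_rate(hot):.2%}")
print(f"sparse load hit rate: {hit_rate(sparse):.2%}")
```

Growing the cache does not help the sparse case: unless it holds essentially the whole key space, a key is evicted long before it is ever requested again.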

The ideas from this point on were:

  • Widely distributing the Database in a highly scalable distribution of DB entry points
    • Would the "Data on demand" project be a solution? Probably too slow to build, although the full database may fit on local disk.
    • Starting a database service on each node? Not practical for a farm like the RCF (but could be used in a Cloud context).
       
  • Moving the MetaData with the data - we may consider re-packing the database data into files
    • ROOT files may be used if we know the schema
    • Extraction of slices usable by other lightweight database technologies (SQLite)
    • ... any other scheme along this idea ...
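As a sketch of the SQLite variant (the table name and columns below are hypothetical; the real schema would come from the Offline DB), a yearly calibration slice could be packed into a single standalone file that travels with the data:

```python
import sqlite3

def pack_slice(path, rows):
    """Pack a (hypothetical) calibration slice into a standalone SQLite file."""
    con = sqlite3.connect(path)
    con.execute(
        """CREATE TABLE IF NOT EXISTS calib (
               begin_time TEXT,   -- validity start of the entry
               channel    INTEGER,
               value      REAL
           )"""
    )
    con.executemany("INSERT INTO calib VALUES (?, ?, ?)", rows)
    con.commit()
    con.close()

def read_slice(path, channel):
    """Read all entries for one channel back from the slice file."""
    con = sqlite3.connect(path)
    rows = con.execute(
        "SELECT begin_time, value FROM calib WHERE channel = ? ORDER BY begin_time",
        (channel,),
    ).fetchall()
    con.close()
    return rows

# Demo with made-up values; a real slice would be e.g. one file per year
# extracted from the master database and shipped next to the data.
pack_slice("slice_y2009.db", [("2009-03-01", 1, 0.97), ("2009-06-01", 1, 0.95)])
print(read_slice("slice_y2009.db", 1))
```

Since the slice is a plain file, it needs no server process on the worker node, which is the main attraction compared to the per-node database service idea above.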

Feasibility is part of the projections section below.

 

Projections

A review of the database size (and the Offline DB size) revealed that the database is about 20 GB in size and that SSD calibrations from year 2005 alone are responsible for 3/4 of the Offline DB size (15 GB out of 20 GB) for a single year. Otherwise, the calibrations DB size per year is almost stable (~0.25 GB), with a bump to 0.45 GB in Y2009. At most, a year slice is then 0.5 GB, which is tiny at the moment.

However, past sizes and projections may indicate rapid growth with new detectors - especially the SSD+HFT combo (or the HFT tracking system).
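The scale of the concern can be sketched with a back-of-the-envelope extrapolation; the baseline numbers come from the review above, but the growth factor for the new silicon detectors is a hypothetical assumption, not a measurement:

```python
# Naive extrapolation of yearly calibration-slice sizes (GB).
base_per_year_gb = 0.25   # stable historical calibrations size per year
bump_y2009_gb = 0.45      # observed Y2009 bump
hft_growth_factor = 10    # assumed: new silicon detectors -> ~10x more entries

projected_gb = base_per_year_gb * hft_growth_factor
print(f"projected yearly slice with SSD+HFT: ~{projected_gb:.1f} GB")
```

Even under this assumed tenfold growth, a yearly slice stays in the few-GB range, i.e. still small enough to fit on local disk or in RAM on a worker node.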

 

Related work