This page serves as a repository of information about various STAR tools used in experimental operations.
The following certificate files are used in this example; the CA certs will be concatenated into a single file, which in this example I call Global_plus_Intermediate.crt:
/etc/pki/tls/certs/wildcard.star.bnl.gov.Nov.2012.cert – host cert.
/etc/pki/tls/private/wildcard.star.bnl.gov.Nov.2012.key – host key (don’t give this one out)
/etc/pki/tls/certs/GlobalSignIntermediate.crt – intermediate cert.
/etc/pki/tls/certs/GlobalSignRootCA_ExtendedSSL.crt – root cert.
/etc/pki/tls/certs/ca-bundle.crt – a big list of many certs.
Build the CA chain file:
cat /etc/pki/tls/certs/GlobalSignIntermediate.crt > Global_plus_Intermediate.crt
cat /etc/pki/tls/certs/GlobalSignRootCA_ExtendedSSL.crt >> Global_plus_Intermediate.crt
cat /etc/pki/tls/certs/ca-bundle.crt >> Global_plus_Intermediate.crt
Then package the host cert, host key, and CA chain into a PKCS12 keystore:
openssl pkcs12 -export -in wildcard.star.bnl.gov.Nov.2012.cert -inkey wildcard.star.bnl.gov.Nov.2012.key -out mycert.p12 -name tomcat -CAfile Global_plus_Intermediate.crt -caname root -chain
Verify the contents of the keystore:
keytool -list -v -storetype pkcs12 -keystore mycert.p12
Finally, reference the PKCS12 keystore from the HTTPS connector in Tomcat's server.xml:
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           keystoreFile="/home/lbhajdu/certs/mycert.p12"
           keystorePass="changeit" keystoreType="PKCS12"
           clientAuth="false" sslProtocol="TLS"/>
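After restarting Tomcat with this connector, a quick sanity check is to confirm that the full certificate chain is actually being served. This is only a sketch; replace <server> with the host running the connector:
# Inspect the certificate chain presented on port 8443
openssl s_client -connect <server>:8443 -showcerts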
One particular detail to be aware of: the pool nodes are now named onlNN.starp.bnl.gov, where 01 <= NN <= 14. The "onllinuxN" names were retired several years ago.
Historical page (circa 2008/9):
GOAL:
Provide a Linux environment for general computing needs in support of experimental operations.
HISTORY (as of approximately June 2008):
A pool of 14 nodes, consisting of four different hardware classes (all circa 2001), has been in existence for several years. For at least the last three years they have run Scientific Linux 3.x with support for the STAR software environment, along with access to various DAQ and Trigger data sources. The number of significant users has probably been fewer than 20, with the heaviest usage related to L2. User authentication was originally based on an antique NIS server, into which we had imported the RCF accounts and passwords. Though that server is still alive, the NIS information has not been maintained, and local accounts on each node became the norm, which is rather tedious to administer. Home directories come in three categories: AFS, NFS on onllinux5, and local home directories on individual nodes; this too is tedious to maintain over time.
There are several "special" nodes to be aware of:
PLAN:
For the run starting in 2008 (2009?), we are replacing all of these nodes with newer hardware.
The basic hardware specs for the replacement nodes are:
Dual 2.4 GHz Intel Xeon processors
1 GB RAM
2 x 120 GB IDE disks
These nodes should be configured with Scientific Linux 4.5 (or 4.6 if we can ensure compatibility with STAR software) and support the STAR software environment.
They should have access to various DAQ and Trigger NFS shares. Here is a starter list of mounts (a sample fstab sketch follows the table):
SERVER | DIRECTORY on SERVER | LOCAL MOUNT POINT | MOUNT OPTIONS |
evp.starp | /a | /evp/a | ro |
evb01.starp | /a | /evb01/a | ro |
evb01 | /b | /evb01/b | ro |
evb01 | /c | /evb01/c | ro |
evb01 | /d | /evb01/d | ro |
evb02.starp | /a | /evb02/a | ro |
evb02 | /b | /evb02/b | ro |
evb02 | /c | /evb02/c | ro |
evb02 | /d | /evb02/d | ro |
daqman.starp | /RTS | /daq/RTS | ro |
daqman | /data | /daq/data | rw |
daqman | /log | /daq/log | ro |
trgscratch.starp | /data/trgdata | /trg/trgdata | ro |
trgscratch.starp | /data/scalerdata | /trg/scalerdata | ro |
startrg2.starp | /home/startrg/trg/monitor/run9/scalers | /trg/scalermonitor | ro |
online.star | /export | /onlineweb/www | rw |
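As an illustration, the first read-only mount in the table might look roughly like this in /etc/fstab (a sketch only; the fully-qualified server name and the NFS options beyond "ro" are assumptions, not taken from the list above):
# evp event pool, mounted read-only (sketch)
evp.starp.bnl.gov:/a   /evp/a   nfs   ro,hard,intr,bg   0 0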
WISHLIST Items with good progress:
WISHLIST Items still needing significant work:
An SSH public key management system has been developed for STAR (see 2008 J. Phys.: Conf. Ser. 119 072005), with two primary goals stemming from the heightened cyber-security scrutiny at BNL:
Users also benefit from having fewer passwords to remember and type.
In purpose, this system is similar to the RCF's key management system, but is somewhat more powerful because of its flexibility in the association of hosts (client systems), user accounts on those clients, and self-service key installation requests.
Here is a typical scenario of the system usage: John Doe uploads his SSH public key through the web interface and requests that it be installed for the account JDOE on host FOO; an administrator approves the request, and the key is then installed on FOO automatically.
At this point, John Doe has key-based access to JDOE@FOO. Simple enough? But wait, there's more! Now John Doe realizes that he also needs access to the group account named "operator" on host BAR. Since his key is already in the key management system he has only to request that his key be added to operator@BAR, and voila (subject to administrator approval), he can now login with his key to both JDOE@FOO and operator@BAR. And if Mr. Doe should leave STAR, then an administrator simply removes him from the system and his keys are removed from both hosts.
There are three things to keep track of here -- people (and their SSH keys of course), host (client) systems, and user accounts on those hosts:
People want access to specific user accounts at specific hosts.
So the system maintains a list of user accounts for each host system, and a list of people associated with each user account at each host.
(To be clear -- the system does not have any automatic user account detection mechanism at this time -- each desired "user account@host" association has to be added "by hand" by an administrator.)
This Key Management system, as seen by the users (and admins), consists simply of users' web browsers (using HTTPS for encryption) and some PHP code on a web server (which we'll call "starkeyw") that inserts uploaded keys, user requests, and administrators' commands into a backend database (which could be on a different node from the web server if desired).
Behind the scenes, each host participating in the system has a keyservices client installed that runs as a system service. The keyservices_client periodically (at five-minute intervals by default) interacts with a different web server (serving different PHP code that we'll call starkeyd). The backend database is consulted for the list of approved associations, and the appropriate keys are downloaded by the client and added to the authorized_keys files accordingly.
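Conceptually, each client's periodic update amounts to something like the following. This is a simplified, hypothetical sketch only; the real keyservices_client is a packaged service, and the starkeyd URL and parameters shown here are invented for illustration:
# Hypothetical sketch of one key-sync pass (NOT the real keyservices_client)
ACCOUNT=operator
# Fetch the approved public keys for this account/host from starkeyd (URL invented for illustration)
curl -s "https://www.star.bnl.gov/starkeyd/?account=${ACCOUNT}&host=$(hostname -s)" -o /tmp/approved_keys
# The real client merges these into the account's authorized_keys file; a plain append is shown here as a simplification
cat /tmp/approved_keys >> /home/${ACCOUNT}/.ssh/authorized_keys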
In our case, our primary web server at www.star.bnl.gov hosts all the STAR Key Manager (SKM) services (starkeyw and starkeyd via Apache, and a MySQL database), but they could each be on separate servers if desired.
Perhaps a picture will help. See below for a link to an image labelled "SKMS in pictures".
We have begun using the Key Management system with several nodes and are seeking to add more (currently on a voluntary basis). Only RHEL 3/4/5 and Scientific Linux 3/4/5 with i386 and x86_64 kernels have been tested, but there is no reason to believe that the client couldn't be built on other Linux distributions or even Solaris. We do not anticipate "forcing" this tool onto any detector sub-systems during the 2007 RHIC run, but we do expect it (or something similar) to become mandatory before any future runs. Please contact one of the admins (Wayne Betts, Jerome Lauret or Mike Dephillips) if you'd like to volunteer or have any questions.
User access is currently based on RCF Kerberos authentication, but may be extended to additional authentication methods (e.g., BNL LDAP) if the need arises.
Client RPMs (for some configurations) and SRPMs are available, and some installation details are available here:
http://www.star.bnl.gov/~dmitry/skd_setup/
An additional related project is the possible implementation of a STAR SSH gateway system (while disallowing direct login to any of our nodes online), in effect acting much like the role of the existing SSH gateway systems at the SDCC. Though we have an intended gateway node online (stargw1.starp.bnl.gov, with a spare on hand as well), its use is not currently required.
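For a user, going through such a gateway could look roughly like the following ~/.ssh/config fragment (a sketch; the username jdoe and the target host pattern are placeholders, and stargw1.starp.bnl.gov is simply the intended gateway mentioned above):
# Hop through the STAR gateway to reach the online pool nodes (sketch)
Host onl??.starp.bnl.gov
    User jdoe
    ProxyCommand ssh -q stargw1.starp.bnl.gov nc %h %p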
Here you go: https://www.star.bnl.gov/starkeyw/
You can use your RCF username and Kerberos password to enter.
When uploading keys, use your SSH public keys; they need to be in OpenSSH format. If yours are not, please consult the "SSH Keys and login to the SDCC" page.
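For example, a key pair generated with OpenSSH's ssh-keygen is already in the correct format, and a public key exported from another client in RFC 4716/SECSH format can be converted; the file names below are placeholders:
# Generate a new key pair; upload only the .pub file
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_star
# Convert an RFC 4716/SECSH public key to OpenSSH format
ssh-keygen -i -f mykey_secsh.pub > mykey_openssh.pub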