This is to serve as a repository of information about various STAR tools used in experimental operations.
This section contains information about using EVO for STAR meetings.
If you would like to be able to use EVO in the 1006 trailer, there is a conference PC set up for use. There is a generic account on the computer for everyone to share.
The account credentials are:
Username: rhicstar
Password: (See below)
Log On To: Conference (This computer)
For security purposes, I will not post the password anywhere that is not encrypted, so please come see me in my office (Building 510, Room 1-179) or send me an e-mail containing your GPG public key. If you do not have a GPG public key, please bring your laptop (desktop users: call me and I'll come to see you) and I'll help you set one up. It is quite useful.
FUSE is a kernel module that acts as a bridge between the kernel’s built-in filesystem functions and user-space code that “understands” the (arbitrary) structure of the mounted content. It allows non-root users to add filesystems to a running system.
Typically, FUSE-mounted filesystems are (nearly) indistinguishable from any other mounted filesystem to the user.
Some examples of FUSE in action:
The Fuse project FileSystems page has a more complete list and links to individual software projects that use FUSE.
SSHFS allows a user (not necessarily root) on host A (the "client") to mount a directory on host B (the "server") using the (almost) ubiquitous SSH client-server communication protocols. Generally, no configuration changes or software installations are required on host B.
The directory on host B then looks like a local directory on host A, at a location in host A's directory structure chosen by the user (in a location where user A has adequate privileges of course).
Unlike NFS, the user on host A must authenticate as a known user on host B, and the operations performed on the mounted filesystem are performed as known user on host B. This avoids the "classic" NFS problem of UID/GID clashes between the client and server.
Here is a sample session with some explanatory comments:
In this example, host A is "stargw1" and host B is "staruser01". The user name is wbetts on both hosts, but the user on host B could be any account that the user can access via SSH.
First, create a directory that will serve as the mountpoint:
[wbetts@stargw1 ~]$ mkdir /tmp/wbssh
[wbetts@stargw1 ~]$ ls -ld /tmp/wbssh
drwxrwxr-x 2 wbetts wbetts 4096 Oct 13 10:52 /tmp/wbssh
Second, mount the remote directory using the sshfs command:
[wbetts@stargw1 ~]$ sshfs staruser01.star.bnl.gov: /tmp/wbssh
In this example, no remote username or directory is specified, so the remote username is assumed to match the local username and the user’s home directory is selected by default. So the command above is equivalent to:
% sshfs wbetts@staruser01.star.bnl.gov:/home/wbetts /tmp/wbssh
That’s it! (No password or passphrase is required in this case, because wbetts uses SSH key agent forwarding)
Now use the remote files just like local files:
[wbetts@stargw1 ~]$ ls -l /tmp/wbssh | head -n 3
total 16000
-rw-rw-r-- 1 1003 1003  6412 Oct 19  2005 2005_Performance_Self_Appraisal.sxw
-rw-rw-r-- 1 1003 1003 10880 Oct 19  2005 60_subnet_PLUS_SUBSYS.sxc
[wbetts@stargw1 ~]$ ls -ld /tmp/wbssh
drwx------ 1 1003 1003 4096 Oct 11 15:56 /tmp/wbssh
The permissions on our mount point have been altered -- now the remote UID is shown (a source of possible confusion) and the permissions have morphed to the permissions on the remote side, but this is potentially misleading too…
[root@stargw1 ~]# ls /tmp/wbssh
ls: /tmp/wbssh: Permission denied
Even root on the local host can’t access this mount point, though root can see it in the list of mounts.
In addition to the ACL confusion, there can be some quirks in behaviour, where sshfs doesn't translate perfectly:
[wbetts@stargw1 ~]$ df /tmp/wbssh
Filesystem                      1K-blocks  Used  Available Use% Mounted on
sshfs#staruser01.star.bnl.gov: 1048576000     0 1048576000   0% /tmp/wbssh
Ideally the user unmounts it once finished; otherwise it sits there indefinitely. (It is probably subject to the same timeouts (TCP, firewall conduit, SSH config, etc.) as an ordinary SSH connection, but in limited testing so far, the connection has been long-lived.) Here is the unmount command:
[wbetts@stargw1 ~]$ fusermount -u /tmp/wbssh/
[wbetts@stargw1 ~]$ ls /tmp/wbssh
[wbetts@stargw1 ~]$
Some additional details:
By default, users other than the user who initiated the mount are not permitted access to the local mountpoint (not even root), but that can be changed by the user, IF it is permitted by the FUSE configuration (as decided by the admin of the client node). The options though are not very granular. The three possible options are:
In any case, whoever accesses the mount point will act as (and have the permissions of) the user on host B specified by the mounter. This requires careful evaluation of the options permitted and user education on the possibilities of allowing inappropriate or unnecessary access to other users.
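Since the list of options did not survive on this page, here is a sketch of the three standard FUSE access modes as sshfs mount options. The hostname is the one from the session above; note that allow_other only works if the client-node admin has enabled user_allow_other in /etc/fuse.conf.

```
# Default: only the mounting user may access the mount point (not even root).
sshfs staruser01.star.bnl.gov: /tmp/wbssh

# -o allow_root: the mounter and local root may access the mount point.
sshfs -o allow_root staruser01.star.bnl.gov: /tmp/wbssh

# -o allow_other: all local users may access the mount point.
# Requires "user_allow_other" in /etc/fuse.conf (the admin's decision).
sshfs -o allow_other staruser01.star.bnl.gov: /tmp/wbssh
```

Whichever option is used, every local user who reaches the mount point acts as the remote user on host B, as described above.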
The mount is not tied to the specific shell it is started in. It seems to last indefinitely: the user can log out of host A, kill remote agents, etc., and the mount remains accessible on future logins. (Interpretation: an agent of some sort is maintained on the client (host A) on the user's behalf. If multiple users have access to the user account on host A, this could be worrisome, in the same manner as allowing others to access the mount point, mentioned above.)
Here are some potential advantages and benefits of using SSHFS, some of which are mentioned above:
And some drawbacks:
And some final details about the configuration of the online gatekeepers that presumably are prime candidates for the use of SSHFS:
The standard installation of FUSE for Scientific Linux 4 seems to not be quite complete. A little help is required to make it work:
In /etc/rc.d/rc.local:
/etc/init.d/fuse start
/bin/chown root.fuse /dev/fuse
/bin/chmod 660 /dev/fuse
A "fuse" group was created; each user who will use SSHFS needs to be a member of this group (this must be kept in mind if we use NIS or LDAP for user management on the gateways).
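For a node with local account management, adding a user to the group might look like this (a sketch; the username is illustrative):

```
/usr/sbin/usermod -a -G fuse wbetts
id wbetts    # verify the new group membership
```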
The default openssh packages from Scientific Linux 3, 4 and 5 (~openssh 3.6, 3.9 and 4.3 respectively) do not support sftp-subsystem logging. Later versions of openssh do (starting at version ~4.4). This provides the ability to log file accesses and trace them to individual (authenticated) users.
I grabbed the latest openssh source (version 5.1) and built it on an SL4 machine with no trouble:
% ./configure --prefix=/opt/openssh5.1p1 --without-zlib-version-check --with-tcp-wrappers
% make
% make install
Then in the sshd_config file, append "-f AUTHPRIV -l INFO" to the sftp-subsystem line. This sets the logging level (INFO) and causes the logs to be sent to /var/log/secure. (To be tried: the VERBOSE log level.)
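The resulting sshd_config line would look something like this (a sketch; the sftp-server path follows from the --prefix used in the build above):

```
# in /opt/openssh5.1p1/etc/sshd_config
Subsystem sftp /opt/openssh5.1p1/libexec/sftp-server -f AUTHPRIV -l INFO
```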
Even at the INFO level, the logs are fairly detailed. Shown below is a sample session, with the client commands on the left and the resulting log entries from the server (carradine, using port 2222 for testing) on the right. For brevity, the time stamps from the log have been removed after the first entry.
CLIENT COMMANDS | SERVER LOG (/var/log/secure) |
sshfs -p 2222 wbetts@carradine.star.bnl.gov:/home/wbetts/ carradine_home | Nov 20 14:30:29 carradine sshd[29120]: Accepted publickey for wbetts from 130.199.60.84 port 41746 ssh2 carradine sshd[29122]: subsystem request for sftp carradine sftp-server[29123]: session opened for local user wbetts from [130.199.60.84] |
ls carradine_home | carradine sftp-server[29123]: opendir "/home/wbetts/." carradine sftp-server[29123]: closedir "/home/wbetts/." |
touch carradine_home/test.txt | carradine sftp-server[29123]: sent status No such file carradine sftp-server[29123]: open "/home/wbetts/test.txt" flags WRITE,CREATE,EXCL mode 0100664 carradine sftp-server[29123]: close "/home/wbetts/test.txt" bytes read 0 written 0 carradine sftp-server[29123]: open "/home/wbetts/test.txt" flags WRITE mode 00 carradine sftp-server[29123]: close "/home/wbetts/test.txt" bytes read 0 written 0 carradine sftp-server[29123]: set "/home/wbetts/test.txt" modtime 20081120-14:36:36 |
cat /etc/DOE_banner >> carradine_home/test.txt | carradine sftp-server[29123]: open "/home/wbetts/test.txt" flags WRITE mode 00 carradine sftp-server[29123]: close "/home/wbetts/test.txt" bytes read 0 written 1119 |
rm carradine_home/test.txt | carradine sftp-server[29123]: remove name "/home/wbetts/test.txt" |
fusermount -u carradine_home/ | carradine sftp-server[29123]: session closed for local user wbetts from [130.199.60.84] |
From these logs, we would appear to have a good record of the who/what/when of sshfs usage. But the need to build our own openssh packages puts a burden on us to track and install updated openssh versions in a timely fashion, rather than relying on the distribution maintainer and the OS's native update manager(s). The log files on a heavily utilised server may also become unwieldy and cause a performance degradation, but I've not made any estimates or tests of these issues.
Here are the specific relevant packages installed on the client test nodes (stargw1 and stargw2):
fuse-2.7.3-1.SL
fuse-libs-2.7.3-1.SL
fuse-devel-2.7.3-1.SL
fuse-sshfs-2.1-1.SL
kernel-module-fuse-2.6.9-78.0.1.ELsmp-2.7.3-1.SL
(Exact versions should not be terribly important, but it appears that fuse-2.5.3 included up to SL4.6 requires more tweaking after installation than fuse 2.7.3 included in SL4.7).
Concatenate the following certs into one file; in this example I call it Global_plus_Intermediate.crt:
/etc/pki/tls/certs/wildcard.star.bnl.gov.Nov.2012.cert – host cert.
/etc/pki/tls/private/wildcard.star.bnl.gov.Nov.2012.key – host key (don’t give this one out)
/etc/pki/tls/certs/GlobalSignIntermediate.crt – intermediate cert.
/etc/pki/tls/certs/GlobalSignRootCA_ExtendedSSL.crt – root cert.
/etc/pki/tls/certs/ca-bundle.crt – a big list of many certs.
cat /etc/pki/tls/certs/GlobalSignIntermediate.crt > Global_plus_Intermediate.crt
cat /etc/pki/tls/certs/GlobalSignRootCA_ExtendedSSL.crt >> Global_plus_Intermediate.crt
cat /etc/pki/tls/certs/ca-bundle.crt >> Global_plus_Intermediate.crt
openssl pkcs12 -export -in wildcard.star.bnl.gov.Nov.2012.cert -inkey wildcard.star.bnl.gov.Nov.2012.key -out mycert.p12 -name tomcat -CAfile Global_plus_Intermediate.crt -caname root -chain
keytool -list -v -storetype pkcs12 -keystore mycert.p12
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" keystoreFile="/home/lbhajdu/certs/mycert.p12" keystorePass="changeit" keystoreType="PKCS12" clientAuth="false" sslProtocol="TLS"/>
One particular detail to be aware of: the name of the pool nodes is now onlNN.starp.bnl.gov, where 01<=NN<=14. The "onllinuxN" names were retired several years ago.
Historical page (circa 2008/9):
GOAL:
Provide a Linux environment for general computing needs in support of the experimental operations.
HISTORY (as of approximately June 2008):
A pool of 14 nodes, consisting of four different hardware classes (all circa 2001), has been in existence for several years. For the last three (or more?) years, they have had Scientific Linux 3.x with support for the STAR software environment, along with access to various DAQ and Trigger data sources. The number of significant users has probably been less than 20, with the heaviest usage related to L2. User authentication was originally based on an antique NIS server, to which we had imported the RCF accounts and passwords. Though still alive, this NIS information has not been kept up to date, and local accounts on each node became the norm over time, though of course this is rather tedious. Home directories come in three categories: AFS, NFS on onllinux5, and local home directories on individual nodes. Again, this gets rather tedious to maintain.
There are several "special" nodes to be aware of:
PLAN:
For the run starting in 2008 (2009?), we are replacing all of these nodes with newer hardware.
The basic hardware specs for the replacement nodes are:
Dual 2.4 GHZ Intel Xeon processors
1GB RAM
2 x 120 GB IDE disks
These nodes should be configured with Scientific Linux 4.5 (or 4.6 if we can ensure compatibility with STAR software) and support the STAR software environment.
They should have access to various DAQ and Trigger NFS shares. Here is a starter list of mounts:
SERVER | DIRECTORY on SERVER | LOCAL MOUNT POINT | MOUNT OPTIONS |
evp.starp | /a | /evp/a | ro |
evb01.starp | /a | /evb01/a | ro |
evb01 | /b | /evb01/b | ro |
evb01 | /c | /evb01/c | ro |
evb01 | /d | /evb01/d | ro |
evb02.starp | /a | /evb02/a | ro |
evb02 | /b | /evb02/b | ro |
evb02 | /c | /evb02/c | ro |
evb02 | /d | /evb02/d | ro |
daqman.starp | /RTS | /daq/RTS | ro |
daqman | /data | /daq/data | rw |
daqman | /log | /daq/log | ro |
trgscratch.starp | /data/trgdata | /trg/trgdata | ro |
trgscratch.starp | /data/scalerdata | /trg/scalerdata | ro |
startrg2.starp | /home/startrg/trg/monitor/run9/scalers | /trg/scalermonitor | ro |
online.star | /export | /onlineweb/www | rw |
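Expressed as /etc/fstab entries, the first and last rows of the table above would look roughly like this (a sketch only; mount options beyond ro/rw would need to be chosen, and none of this is a tested configuration):

```
# /etc/fstab fragments (illustrative)
evp.starp:/a         /evp/a          nfs  ro  0 0
online.star:/export  /onlineweb/www  nfs  rw  0 0
```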
WISHLIST Items with good progress:
WISHLIST Items still needing significant work:
An SSH public key management system has been developed for STAR (see 2008 J. Phys.: Conf. Ser. 119 072005), with two primary goals stemming from the heightened cyber-security scrutiny at BNL:
A benefit for users also can be seen in the reduction in the number of passwords to remember and type.
In purpose, this system is similar to the RCF's key management system, but is somewhat more powerful because of its flexibility in the association of hosts (client systems), user accounts on those clients, and self-service key installation requests.
Here is a typical scenario of the system usage:
At this point, John Doe has key-based access to JDOE@FOO. Simple enough? But wait, there's more! Now John Doe realizes that he also needs access to the group account named "operator" on host BAR. Since his key is already in the key management system he has only to request that his key be added to operator@BAR, and voila (subject to administrator approval), he can now login with his key to both JDOE@FOO and operator@BAR. And if Mr. Doe should leave STAR, then an administrator simply removes him from the system and his keys are removed from both hosts.
There are three things to keep track of here -- people (and their SSH keys of course), host (client) systems, and user accounts on those hosts:
People want access to specific user accounts at specific hosts.
So the system maintains a list of user accounts for each host system, and a list of people associated with each user account at each host.
(To be clear -- the system does not have any automatic user account detection mechanism at this time -- each desired "user account@host" association has to be added "by hand" by an administrator.)
This Key Management system, as seen by the users (and admins), consists simply of users' web browsers (with https for encryption) and some PHP code on a web server (which we'll call "starkeyw") which inserts uploaded keys and user requests (and administrator's commands) to a backend database (which could be on a different node from the web server if desired).
Behind the scenes, each host that is participating in the system has a keyservices client installed that runs as a system service. The keyservices_client periodically (at five-minute intervals by default) interacts with a different web server (serving different PHP code that we'll call starkeyd). The backend database is consulted for the list of approved associations, and the appropriate keys are downloaded by the client and added to the authorized_keys files accordingly.
In our case, our primary web server at www.star.bnl.gov hosts all the STAR Key Manager (SKM) services (starkeyw and starkeyd via Apache, and a MySQL database), but they could each be on separate servers if desired.
Perhaps a picture will help. See below for a link to an image labelled "SKMS in pictures".
We have begun using the Key Management system with several nodes and are seeking to add more (currently on a voluntary basis). Only RHEL 3/4/5 and Scientific Linux 3/4/5 with i386 and x86_64 kernels have been tested, but there is no reason to believe that the client couldn't be built on other Linux distributions or even Solaris. We do not anticipate "forcing" this tool onto any detector sub-systems during the 2007 RHIC run, but we do expect it (or something similar) to become mandatory before any future runs. Please contact one of the admins (Wayne Betts, Jerome Lauret or Mike Dephillips) if you'd like to volunteer or have any questions.
User access is currently based on RCF Kerberos authentication, but may be extended to additional authentication methods (eg., BNL LDAP) if the need arises.
Client RPMs (for some configurations) and SRPM's are available, and some installation details are available here:
http://www.star.bnl.gov/~dmitry/skd_setup/
An additional related project is the possible implementation of a STAR ssh gateway system (while disallowing direct login to any of our nodes online), in effect acting much like the current ssh gateway systems' role in the SDCC. Though we have an intended gateway node online (stargw1.starp.bnl.gov, with a spare on hand as well), its use is not currently required.
Here you go: https://www.star.bnl.gov/starkeyw/
You can use your RCF username and Kerberos password to enter.
When uploading keys, use your SSH public keys - they need to be in OpenSSH format. If not, please consult SSH Keys and login to the SDCC.
The STAR Electronic Shiftlog (ESL) is written in JSP (JavaServer Pages) and requires a web server that can render JSP content. Unlike PHP, JSP is compiled into Java classes "just in time": a page is compiled the first time it is accessed, and does not need to be compiled again for the life of the page, or until the page is modified. The forerunner of JSP is servlets; these are also used in the shiftlog, mostly to stream images. The technologies differ in that servlets must be compiled in advance of being deployed.
Our JSP server is Apache Tomcat. Documentation and newer versions can be downloaded from http://tomcat.apache.org/. Although Tomcat is a fully functional web server in its own right, we prefer to let the Apache web server serve the HTML content and only require Tomcat to serve the JSP pages that Apache cannot. This is accomplished by way of the mod_jk Apache Tomcat Connector using the ajp13 protocol. Tomcat listens on port 8080; this is blocked from the outside, but can be reached from a browser started on the online web server itself.
The Tomcat server hosting the shiftlog is deployed on the online web server online.star.bnl.gov and runs under the tomcat account. In order to log on to the online web server to administer Tomcat and the ESL, you will need keys mapped to the tomcat user account. Please see Wayne Betts or Jérôme Lauret about getting your keys mapped. There are multiple versions of Tomcat residing in /opt.
All versions of Tomcat are placed in the /opt folder, in a subfolder clearly denoting the version number. (When you unzip Tomcat, this is usually how it comes.) Examples are:
/opt/apache-tomcat-5.5.20/
/opt/apache-tomcat-6.0.18/
The currently used version of Tomcat is linked from /opt/tomcat/. Below is an ls of the tomcat link:
-bash-3.00$ ls -l /opt/tomcat
lrwxrwxrwx 1 root root 22 Nov 17 11:11 /opt/tomcat -> ./apache-tomcat-6.0.18
Note that this folder is the tomcat user's home directory. It contains the .ssh folder which holds your keys, so relinking it may lock you out if you do not transfer this folder in advance.
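Switching the active Tomcat is then just a matter of re-pointing the symlink. Here is a runnable sketch of the idiom, using a scratch directory rather than /opt (and remembering the .ssh caveat above before doing this for real):

```shell
#!/bin/sh
# Demonstrate the versioned-directory-plus-symlink idiom in a scratch area.
mkdir -p /tmp/tomcat-demo/apache-tomcat-5.5.20 /tmp/tomcat-demo/apache-tomcat-6.0.18

# -sfn: replace the link in place even if it already exists
ln -sfn ./apache-tomcat-6.0.18 /tmp/tomcat-demo/tomcat

readlink /tmp/tomcat-demo/tomcat
```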
After you install a new version of Tomcat you will want to configure it.
There are some environment variables whose existence you will want to verify; if they don't exist, you will want to set them, preferably in a start-up script so they survive a server restart.
$CATALINA_HOME: /opt/tomcat
$JAVA_HOME: /usr/java/default
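If they are missing, a start-up script fragment such as this (dropped into, e.g., /etc/profile.d/ or the init script; the paths are the ones given above) would set them:

```shell
# Tomcat environment (paths as used on online.star.bnl.gov)
export CATALINA_HOME=/opt/tomcat
export JAVA_HOME=/usr/java/default
```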
Inside the Tomcat folder you will find these directories (and some others):
$CATALINA_HOME/bin/
$CATALINA_HOME/logs/
$CATALINA_HOME/webapps/
$CATALINA_HOME/conf/
$CATALINA_HOME/bin/ holds the executables (for linux and windows).
To startup the Tomcat server use:
% $CATALINA_HOME/bin/startup.sh
To shut it down use:
% $CATALINA_HOME/bin/shutdown.sh
You will want to modify $CATALINA_HOME/bin/catalina.sh. This is a script called by startup.sh; its function is to invoke the Java process which is the Tomcat server.
Directly under the header these lines are added:
# added by Levente Hajdu #####################################
export JAVA_OPTS="$JAVA_OPTS -Xmx512M -Djava.library.path=/usr/lib64 -Djava.awt.headless=true"
#############################################################
A description of the options used follows:
-Xmx512M sets the memory ceiling of the Java VM which runs the server to 512 MB; this should be sufficient for our needs. Consumption beyond this limit leads to OutOfMemoryErrors and, eventually, the Tomcat process being terminated.
-Djava.library.path sets the search path for an optional set of native (non-Java) libraries which Tomcat can use for improved performance. If it is not present, you will see suggestions in the Tomcat log to set it.
-Djava.awt.headless=true prevents a particular type of crash. This server also hosts the SUMS statistics pages, which use libraries (JFreeChart) to render images for display, and these have a dependency on X-server libraries. If Tomcat is started by a user that has X-forwarding enabled but no X server running, Tomcat would crash when it tries to execute those JSPs without this line present.
You will be spending a lot of time in $CATALINA_HOME/conf/. The file that controls the Tomcat context paths is $CATALINA_HOME/conf/server.xml. This file requires editing whenever software is deployed at a new context path. Before you edit this file, always make a backup. Each year of the shiftlog resides on a different context path. Here is the list:
http://online.star.bnl.gov/apps/shiftLog2003/
http://online.star.bnl.gov/apps/shiftLog2004/
http://online.star.bnl.gov/apps/shiftLog2005/
http://online.star.bnl.gov/apps/shiftLog2006/
http://online.star.bnl.gov/apps/shiftLog2007/
http://online.star.bnl.gov/apps/shiftLog2008/
http://online.star.bnl.gov/apps/shiftLog2009/
The current year is always at:
http://online.star.bnl.gov/apps/shiftLog/
If we look inside the $CATALINA_HOME/conf/server.xml file we will see an entry for each one of these paths:
<!--Shiftlog 2007-->
<Context className="org.apache.catalina.core.StandardContext"
         cachingAllowed="true"
         charsetMapperClass="org.apache.catalina.util.CharsetMapper"
         cookies="true" crossContext="false" debug="0"
         docBase="/var/tomcat/webapps/shiftLog2007.war"
         mapperClass="org.apache.catalina.core.StandardContextMapper"
         path="/apps/shiftLog2007" privileged="false" reloadable="true"
         swallowOutput="false" useNaming="true"
         wrapperClass="org.apache.catalina.core.StandardWrapper">
    <Environment description="" name="year" override="false" type="java.lang.Integer" value="2007"/>
    <Environment description="" name="isEditable" override="false" type="java.lang.Boolean" value="false"/>
    <Environment description="" name="runLogLink" override="false" type="java.lang.String" value="http://online.star.bnl.gov/RunLog/Summary.php?run="/>
    <Environment description="" name="runNumber" override="false" type="java.lang.Integer" value="7"/>
</Context>
This is the block of XML for the shiftlog for 2007. With different versions of Tomcat the syntax of this file can change, but it usually doesn't change much. Let's go over the important properties in this block:
docBase – Tomcat supports web archive (.war) files. This is basically a zip file with a special internal structure. The explanation of the preparation of one of these files would take a whole Drupal page of its own.
path – This is the context path at which the site will appear when you view it in your web browser. It is the part of the URL after the server name.
Environment – The Environment sub-tag makes information available to the program. The format is fairly simple; however, you have to be careful to set override="false", or else the .war file's ./WEB-INF/web.xml will overwrite these values with its own.
The environment properties for the shiftlog are:
year – this is the shiftlog year. Example: “2007”
isEditable – a boolean value; after the run has completed, access to the editor is turned off by setting this to false.
runLogLink – This is the url for the run log. The shiftlog uses this to build links to the run log.
runNumber – almost the same as the year, just expressed as the run number. Examples:
run 8 = 2008
run 9 = 2009
run 10 = 2010
The $CATALINA_HOME/webapps/ web apps folder holds the default pages that come pre-packaged with the Tomcat server. This is also the location where Tomcat unpacks the war files. The folder naming conventions can change from Tomcat version to Tomcat version.
The $CATALINA_HOME/logs/ directory, as you may have guessed, holds log files. You will want to look over all the files in here even if Tomcat seems to be functioning correctly; the logs can point out errors you may not be aware of. The file $CATALINA_HOME/logs/catalina.out holds the standard output stream of your JSPs (not to be confused with the HTML output stream) along with Tomcat's own standard output stream, making this a handy file for debugging.
To deploy a war file the procedure is as follows:
Stop Tomcat:
$CATALINA_HOME/bin/shutdown.sh
NOTE: If you deploy the Tomcat administrative web interface, shutting down the whole server is not strictly required, because you could just shut down the context path. But I prefer to shut down the whole server as a matter of habit; the time required is so short that no one really notices.
If this is an upgrade of an existing .war file (else move to step 3), back up the old .war file. All war files are located in /var/tomcat/webapps/; here is a listing of the directory (note the naming convention for the web archive files):
-bash-3.00$ ls -1 /var/tomcat/webapps/shiftLog*.war
/var/tomcat/webapps/shiftLog2003.war
/var/tomcat/webapps/shiftLog2004.war
/var/tomcat/webapps/shiftLog2005.war
/var/tomcat/webapps/shiftLog2006.war
/var/tomcat/webapps/shiftLog2007.war
/var/tomcat/webapps/shiftLog2008t.war
/var/tomcat/webapps/shiftLog2008.war
/var/tomcat/webapps/shiftLog2009.war
When removing one of these files I move it to the /var/tomcat/webapps/old/ directory and rename it following the convention here:
shiftLog2007.Apr03.965628000.war
shiftLog2007.Apr04.288184000.war
shiftLog2007.Apr09.200079000.war
shiftLog2007.Dec03.805483000.war
shiftLog2007.Feb07.785336000.war
...
shiftLog2007.Mar27.875569000.war
shiftLog2007.Nov09.134343000.war
shiftLog2007.Nov28.320967000.war
shiftLog2007.Nov28.657299000.war
It is important to retain the backup in case there is something wrong with the new .war file, keeping the old one will allow you to roll back whilst the problem is being corrected.
Next, copy over the new .war file from the node on which it resides. scp is the method I use for this. The syntax is:
% scp [username]@[nodeName]:[Path&File] /var/tomcat/webapps/shiftLog[year].war
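Filled in with concrete values, an upgrade of the 2009 log might look like this (the source host and path here are purely illustrative, not real STAR nodes):

```
% scp jdoe@builder.star.bnl.gov:/home/jdoe/dist/shiftLog2009.war /var/tomcat/webapps/shiftLog2009.war
```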
If this is a new deploy and not an upgrade of an existing .war file, you will have to configure a context path in $CATALINA_HOME/conf/server.xml (else move to step 6).
If this is an upgrade, you will have to dump (delete) the expanded .war file in $CATALINA_HOME/webapps/; it should be a directory with a name similar to that of the .war file. You do not have to back this up, because you already have the .war file backed up.
Startup Tomcat
% $CATALINA_HOME/bin/startup.sh
Open up a web browser and check that the page displays correctly
Run the shiftlog Java Web Start application to confirm that the developer has signed his or her jar files within the .war file; if not, you will need to have the .war file rebuilt.
Because upgrades are done fairly frequently, mostly for requests for new features and some bug fixes, I keep a script to do the upgrade process listed above; however, the script requires modification before running it. The name of the script is $CATALINA_HOME/bin/deploy_year.
If you have done the upgrade but do not notice any change:
check that you dumped the expanded .war directory in $CATALINA_HOME/webapps/ (step 5)
also clear your web browser's cache
If you get the “page unavailable” message, check that the tomcat process is running. Use the command
ps -ef | grep tomcat | grep java
Even if it is running, shut it down and try to restart it; like an old car, Tomcat may not start the first time you try to crank it over.
STAR experts deemed absolutely essential may request to be placed on the expert editor list to edit the ShiftLog directly via the web interface. The user must provide justification for needing to edit the ShiftLog remotely and provide their Kerberos (RCF) user name.
Administrator Notes:
The Tomcat web server will authenticate the user with Kerberos and Tomcat manages the session. We have written the custom module OnlineTomcatRealm.jar to do the authentication which is configured in $CATALINA_HOME/conf/server.xml.
ssh tomcat@online.star.bnl.gov
Edit the file $CATALINA_HOME/conf/tomcat-users.xml
Note that $CATALINA_HOME may not be defined; however, it is wherever Tomcat is installed, in our case /opt/tomcat.
The file looks like this:
<tomcat-users>
    <role rolename="manager"/>
    <role rolename="logEditor"/>
    <user username="jfaustus" roles="logEditor"/>
    <user username="mephistophilis" roles="logEditor"/>
</tomcat-users>
Add a new user entry with the username and the roles set to "logEditor".
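For example, granting edit rights to a hypothetical user jsmith means adding one line inside the tomcat-users element:

```xml
<!-- "jsmith" is a hypothetical username; the role name must match exactly -->
<user username="jsmith" roles="logEditor"/>
```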
Then restart the server:
$CATALINA_HOME/bin/shutdown.sh
$CATALINA_HOME/bin/startup.sh
Check that it works and you’re done.