Running Magellan Cloud at NERSC, Run 11
Updated on Wed, 2011-02-16 13:37 by balewski. Originally created by jeromel on 2011-01-10 10:30.
Fig 1. General scheme of allocation of resources and connections between machines
Fig 2. Specific implementation of the propagation of STAR DB snapshots; external DB monitoring is abandoned
Task | In Charge | To Do | Blocks | ERT |
---|---|---|---|---|
Increase /mnt to 100 GB | Iwona | Check if possible and reconfigure Eucalyptus | Done | |
Establish the number of nodes and the target to scp data from | Iwona | Test scp from /global/scratch to the VM against the carver nodes that support /global/scratch (a test sketch follows the table). | Move Eucalyptus to the set of public IPs registered in DNS. | 2011/01/14 |
Integrate the transfer of DAQ files using the FastOffline workflow | Jerome | The current scheme restores at most 100 files every 6 hours; the BNL -> Cloud transfer, deletion of transferred files, etc. are still needed (a transfer sketch follows the table). | None | 2011/02/08 |
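A minimal sketch of the scp test from the second row, assuming the script runs on a carver node that mounts /global/scratch and pushes to an scp-reachable Eucalyptus VM. The VM address, user, and sample file name are placeholders, not the actual Magellan configuration.

```python
#!/usr/bin/env python3
# Sketch of the scp test: copy one sample file from /global/scratch
# on a carver node to the VM and report the observed transfer rate.
# VM_TARGET and SAMPLE_FILE are placeholders, not the real setup.
import os
import subprocess
import time

VM_TARGET = "star@vm-placeholder.nersc.gov:/mnt/data/"  # hypothetical VM endpoint
SAMPLE_FILE = "/global/scratch/star/test_sample.daq"    # hypothetical test file

def scp_rate(path, target):
    """Copy one local file with scp and return the rate in MB/s."""
    size_mb = os.path.getsize(path) / 1.0e6
    start = time.time()
    subprocess.check_call(["scp", "-q", path, target])
    return size_mb / (time.time() - start)

if __name__ == "__main__":
    print("%.1f MB/s to %s" % (scp_rate(SAMPLE_FILE, VM_TARGET), VM_TARGET))
```

Repeating the test against each candidate carver node would give the per-node rates needed to settle how many source hosts the transfer should use.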
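The last row describes the missing BNL -> Cloud step. Below is a sketch under the assumption of a local staging directory filled by the FastOffline restores and an scp-reachable cloud endpoint; the 100-file and 6-hour numbers come from the row, while the paths and destination are placeholders.

```python
#!/usr/bin/env python3
# Sketch of the missing BNL -> Cloud step: every 6 hours, pick up the
# (at most 100) DAQ files restored by FastOffline, scp them to the cloud
# VM, and delete the local copy only after a successful transfer.
# STAGE_DIR and CLOUD_DEST are placeholders, not the production settings.
import glob
import os
import subprocess
import time

STAGE_DIR = "/star/data/fastoffline_stage"               # hypothetical staging area at BNL
CLOUD_DEST = "star@vm-placeholder.nersc.gov:/mnt/daq/"   # hypothetical cloud endpoint
MAX_FILES = 100                                          # at most 100 files per restore cycle
CYCLE_SEC = 6 * 3600                                     # one cycle every 6 hours

def push_and_clean():
    """Transfer the staged DAQ files and remove the ones copied successfully."""
    for path in sorted(glob.glob(os.path.join(STAGE_DIR, "*.daq")))[:MAX_FILES]:
        if subprocess.call(["scp", "-q", path, CLOUD_DEST]) == 0:
            os.remove(path)  # free staging space only after the copy succeeded
        else:
            print("transfer failed, keeping", path)

while True:
    push_and_clean()
    time.sleep(CYCLE_SEC)
```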