Latest news

Due to maintenance at power stations in the Western Cape, the sites in the Cape Town region are particularly vulnerable to power outages for the foreseeable future. Site administrators have been alerted and have been requested to open downtimes as soon as services are affected. Both sites, ZA-CHPC and ZA-UCT-ICTS, have adequate UPS capacity, but this depends on the compute load of the jobs at the sites. We cannot guarantee 100% uptime, but we will do our best to maintain > 90% reliability.

More information: https://www.capetown.gov.za/en/electricity/Pages/LoadShedding.aspx

Thanks,
Bruce
Dear VO Manager,

We are revising the overall EGI strategy to guide our activities over the next 5-10 years. The strategy would support the EGI community in realising the 'Open Science Commons' vision recently described in this policy paper: http://www.egi.eu/news-and-media/publications/OpenScienceCommons_v2.pdf.

We would also like to hear your VO community's opinion on what the EGI strategy should look like. We have set up a short online survey to capture your input. We would appreciate it if you could spend 15 minutes in the next few days to provide input through this survey at https://www.surveymonkey.com/r/egi-user-survey-1.

Many thanks.

Gergely Sipos
Technical Outreach Manager, EGI.eu 
e-mail: gergely.sipos@egi.eu 
Dear Partners,

The BG03-NGCC RC will be decommissioned within the next 2 months.
GGUS ticket #111157 has been created.

 The decommissioning timeline is below:

14.01.2015 - This broadcast message is sent to the affected VO managers and users and the NGI BG manager.
01.02.2015 - The site will enter downtime, during which VO users may retrieve their stored data.
02.03.2015 - The site status will be changed to SUSPENDED in GOCDB.
05.03.2015 - The Resource Centre Operations Manager will contact EGI Operations and EGI CSIRT.
06.03.2015 - The RC will be turned off. Log files will be kept for 90 days after that.
04.06.2015 - End of the log retention period. The EGI operations team will be notified.
05.06.2015, 18h - Garden party with buffet, champagne, caviar, etc. The best RC admin will be awarded.

The following RC services registered in GOCDB are affected:

CE: ce02.ngcc.acad.bg
SE: se02.ngcc.acad.bg
gLite-APEL: mon.ngcc.acad.bg
Site BDII: sbdii.ngcc.acad.bg

Please retrieve any data you need from the SE of the site (se02.ngcc.acad.bg) in accordance with this timeline.

 List of affected VOs:

alice
bg-edu.grid.acad.bg
biomed
biotech.grid.acad.bg
cms
dteam
env.see-grid-sci.eu
esr
meteo.see-grid-sci.eu
mm-comp-chem.grid.acad.bg
national-heritage.grid.acad.bg
new-energy-sources.grid.acad.bg
ops
ops.vo.egee-see.org
see
seegrid
seismo.see-grid-sci.eu

 Regards,

 Vladimir Dimitrov
 (Resource Centre Operations Manager, NGI_BG)
Action: Enable multicore accounting.

Dear NGI managers and site administrators,

As many EGI user communities are now exploiting multicore hardware, it is important for them that accounting correctly reflects the usage of cores and CPUs. At the December Operations Management Board (OMB) it was decided to ask all sites using the APEL client to configure it to publish the number of cores used by jobs.

To enable multicore accounting, you will need to edit the configuration file for the APEL parser, the software which parses BLAH logs and batch logs to produce accounting records. The configuration file is usually found at /etc/apel/parser.cfg. In the section labelled [batch], change:

parallel = false
to
parallel = true
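
For example, this could be done from the command line as follows (a sketch, assuming the default file location and that the parallel option appears only in the [batch] section; back up the file first):

  # Back up the parser configuration, then enable multicore reporting.
  cp /etc/apel/parser.cfg /etc/apel/parser.cfg.bak
  sed -i 's/^parallel *= *false/parallel = true/' /etc/apel/parser.cfg
  # Verify the change took effect:
  grep -A 5 '^\[batch\]' /etc/apel/parser.cfg | grep parallel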

This will enable multicore reporting for all future accounting data. Please note that this does not change historical data. Also note that republishing old data is not sufficient to show multicore information - the log files will need to be reparsed. If you wish to republish old data with multicore enabled, please open a GGUS ticket with the APEL team so that we can help you with the process.

If you use the SGE parser, please be aware that it only reports on the number of processors used in a job. It does not report the number of nodes. If you know how to get around this limitation, then please get in touch with the APEL team at apel-admins@stfc.ac.uk.

The multicore accounting data can currently be seen here:
http://accounting-devel.egi.eu/show.php?
Drill down to your site and select the grouping "Show data for: Submitting Host" as a function of "Number of Processors". Values of 0 mean the parallel option was false when the data were published. The Submitting Host is a new feature in the accounting portal which lets a site see in more detail which CEs are publishing.

Stuart Pullinger
APEL Team Leader

Best regards,
EGI Operations Support Team
The problems affecting access to services at the RAL Tier1 have been resolved. Services were unavailable from 10:30 until 11:40 local time (=UTC) this morning (24th December).
We are investigating a networking problem that is affecting access to services at the RAL Tier1 (including GOC DB).

>>> More news <<<

The APEL repository problem has been fixed. The summaries used by the accounting portal have now been brought up to date. 

The next time the portal updates you should see data received up to today. 

The APEL Team apologise for any inconvenience caused by this break in service.
Over the weekend there was an error in the APEL repository during the preparation of data to send to the Accounting Portal. This is under investigation.

Sites can be assured that there were no problems receiving their data. It is all stored safely in the repository and will appear in the portal soon after the problem is resolved.
Dear all,
There has been a change (an OS upgrade) to the DNS server which hosts the domain sagrid.ac.za.
There may be some disruption in name resolution while the change propagates, although we don't expect any.
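
If you would like to check from your side, one quick test (illustrative) is:

  # Confirm the domain resolves and list its authoritative name servers.
  dig +short sagrid.ac.za NS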

Please let us know if you experience any issues.
Dear Users,


The Operations Portal 3.1.1 is now online.

The highlights of this release are:
- the removal of the validation step for VO ID card updates
- the management of tickets for GRIDOPS sites

For the full list of features and improvements, please read the release notes:
http://operations-portal.egi.eu/home/tasksList


Don't hesitate to contact us with comments, feedback, or bug reports at
cic-information@in2p3.fr

Regards,
Operations Portal Team
Dear colleagues,

please take note of the following items concerning the switch to the
new VOMS servers:

UI configuration
================

As of late Nov the old VOMS services (lcg-voms.cern.ch and voms.cern.ch)
cannot be used for VOMS proxies anymore.  It is thus desirable to remove
any references to those services from the VOMS client configuration on
User Interface hosts.  The configuration files are by default located
in the /etc/vomses directory.

Note: the YAIM configuration tool will only _create_ such files according
to its configuration - stale files will have to be _manually_ removed.

The VOMS clients will skip over services that are not available (anymore),
but in such cases the user may get confusing error messages.  Furthermore,
the client may run into a timeout before reaching a service that works.

For these reasons it is best to fix stale client configurations soon.
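
A quick way to find such stale entries (a sketch, assuming the default /etc/vomses location; hostnames are quoted in vomses files):

  # List vomses files that still reference the retired services.
  grep -rl -e '"lcg-voms.cern.ch"' -e '"voms.cern.ch"' /etc/vomses
  # Review the matches, then remove the stale files by hand.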

The routers in the CERN Computer Center have been temporarily configured
such that external clients, too, may quickly skip over the old services,
but this arrangement will be stopped on Tue Feb 3, 2015.

Grid-mapfile configuration
==========================

One of the old services (voms.cern.ch) was also being used for creating
various types of grid-mapfiles still used by some grid services.

At the time of this broadcast that functionality (VOMS-Admin) is still
available until Tue Feb 3, 2015.

Grid-mapfiles are typically created by the edg-mkgridmap utility, which is
robust against a VOMS server being temporarily unavailable for some VO:
in such cases it will not delete any DNs for the affected VO.

In the current case, however, the given VOMS server will not come back
and therefore should instead be _removed_ from the configuration.
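
As an illustration only (the exact entries depend on your setup), a stale
edg-mkgridmap.conf line for the retired server would look roughly like the
following and should be deleted or commented out:

  # Hypothetical entry pointing at the retired VOMS server - remove it:
  group vomss://voms.cern.ch:8443/voms/dteam .dteam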

LSC files
=========

LSC files referring to the old VOMS servers are simply ignored;
it is therefore not necessary to remove them explicitly.
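
For context, such a file typically lives under
/etc/grid-security/vomsdir/<vo>/<hostname>.lsc and simply lists the VOMS
server certificate subject DN followed by its issuer DN, roughly like this
(illustrative DNs):

  /DC=ch/DC=cern/OU=computers/CN=lcg-voms.cern.ch
  /DC=ch/DC=cern/CN=CERN Trusted Certification Authority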

VOMS configuration details
==========================

The details for the new situation are described on this page:

https://twiki.cern.ch/twiki/bin/view/LCG/VOMSLSCfileConfiguration
Site UNI-DORTMUND will update all worker nodes to SL6. The CreamCE udo-ce06.grid.tu-dortmund.de with SL5 worker nodes will be decommissioned. A downtime has been declared. Please switch to the new CreamCE udo-ce08.grid.tu-dortmund.de running SL6 and UMD3 software. 
Dear all,

EUGridPMA has announced a new IGTF release. Based on this release, a new set of CA RPMs has been packaged for EGI.

Please upgrade at your earliest convenience, and within the next six days. Once this grace period is over, SAM will raise critical errors on CA tests if old CAs are still detected.
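
On RPM-based systems the upgrade is typically a single command (a sketch, assuming the EGI trust anchor repository is already configured and using the meta-package named in the release notes below):

  # Pull in the new trust anchor release from the configured repository.
  yum update ca-policy-egi-core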

Please check https://wiki.egi.eu/wiki/EGI_IGTF_Release for more details.

EGI TSA1.3 Team

Release Notes:

European Grid Infrastructure EGI Trust Anchor release 1.61          2014.12.01

------------------------------------------------------------------------------
   For release DOCUMENTATION available on this EGI Trust Anchor release see   
               https://wiki.egi.eu/wiki/EGI_IGTF_Release                      
------------------------------------------------------------------------------

Modifications compared to the previous release:
* updated to IGTF Accredited CA distribution version 1.61-1 Classic, SLCS and 
  MICS profiles, encoded in meta-package "ca-policy-egi-core-1.61-1" (for new 
  installs) and "lcg-CA-1.61-1" (for sites upgrading from EGEE/LCG releases).

* Location of the repository changed to repository.egi.eu.  See documentation
  for details and the updated repo files.

* You may install BOTH the "egi-core" AND "lcg" meta-packages, according to
  your policies. Note that your organisation or NGI may have a specific
  policy and may have added or removed CAs compared to the EGI core policy.

The following notices are republished from the IGTF and EUGridPMA, insofar
as pertinent to this release. More information can be found in the
EUGridPMA newsletter (see https://www.eugridpma.org/):

Changes from 1.60 to 1.61
-------------------------
(1 December 2014)

* Added new IPv6-capable crl_url entries for NCSA and CILogon CAs (US)
* Added accredited TSU (Georgia) CA (GE)
* Extended life time and updated digest function of AustrianGrid CA (AT)

The CA modifications, encoded in both "requires" and "obsoletes" clauses, have
been incorporated in the above-mentioned meta-package RPMs. This trust anchor
release is best enjoyed with fetch-crl v3 or better, available from popular
GNU/Linux OS (add-on) repositories (Fedora, EPEL, Debian) and from the IGTF.

Version information: ca-policy-egi-core = 1.61-1
Dear LHC experiment and Ops VO managers,
please remove the obsolete VOMS servers from the
"VOMS information" (!) section in your VO card:

http://operations-portal.egi.eu/vo/view/voname/alice
http://operations-portal.egi.eu/vo/view/voname/atlas
http://operations-portal.egi.eu/vo/view/voname/cms
http://operations-portal.egi.eu/vo/view/voname/lhcb
http://operations-portal.egi.eu/vo/view/voname/ops

This should take only a few minutes of your time,
while making a significant difference for sites, thanks!

Note: _VOMRS_ remains valid on lcg-voms.cern.ch.
Dear VO Managers,

The IDGF Operations Centre is operational.

We would like to announce that the IDGF Operations Centre is now fully operational
and has completed its integration procedure. All necessary operational teams and tools
are established in the IDGF Operations Centre, and they are ready for production.
This Operations Centre is visible in all operational tools as IDGF and is responsible for all IDGF sites.

Best regards,
EGI Operations Support Team