Advanced Research Computing offers a premium, ‘resource as a service’ option for research groups that need additional storage (above the default, free allocation), BEAR Cloud, or dedicated or priority access to BEAR's compute services.
This relieves research groups of infrastructure support and system administration effort, and allows the University to buy cost-effectively by aggregating needs. It also ensures that the services supporting the research are delivered on highly reliable, enterprise-class equipment with multiple resilience features, hosted in our efficient, water-cooled data centre, maximising availability while reducing carbon emissions, disruption and maintenance. Finally, it radically reduces lead times for standard requirements.
For such purchases, compute and storage resources are charged at a rate equivalent to the core component costs (including the essential licensing and warranties). Except in the case of large requirements, the costs for all supporting infrastructure (the data centre itself, management devices, racks, enclosures, controllers, switches, power and network cabling etc.) are usually met by IT Services.
Compute Resource
We offer two options where a research group wishes to pay for premium, non-standard compute access: Priority Access to Compute Resources and Dedicated Compute Resources.
Priority Access to Compute Resources
Advanced Research Computing's preferred mechanism is Priority Access to BlueBEAR resources (including CaStLeS for life science research). This option allows better overall utilisation of BlueBEAR than dedicated resources, and should keep queue times low for all BlueBEAR users. The main benefit is that the physical resources Advanced Research Computing purchases with the funds are added to the general shared pool and are therefore available to all BlueBEAR users.
In this model, all members of nominated BEAR projects can submit jobs to BlueBEAR with an increased priority, meaning they effectively start nearer (or at) the front of the queue. This leads to very short queue times. If a research group has many members, all of whom use the priority queue, then the fair-share system in BlueBEAR's scheduler means that queue times will increase (although they will still be shorter than they would be without priority access).
The following table shows the standard priority options, including the additional CPU and RAM limits on top of those available in the shared queue. That is, you can run more and/or larger jobs in the priority queues than in the shared queues, and priority jobs will start sooner than they would in the shared queues. Please note that large jobs (i.e. those taking a significant fraction of the CPU or RAM limit), or jobs requesting one or more complete nodes, will still queue for long enough for the scheduler to make space for them. Each higher priority level gives jobs a higher priority, so priority 2 jobs will (in general) start before priority 1 jobs if they are submitted simultaneously.
NOTE: The specific implementation details of how the different priorities interact are subject to review and change, as we monitor usage and maintain fair access at the different levels.
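For illustration, on a Slurm-based cluster such as BlueBEAR, priority access of this kind is typically tied to the project account and/or a quality of service (QOS) attached to it. The sketch below shows what a priority submission might look like; the project name and QOS name are hypothetical placeholders, not confirmed BlueBEAR identifiers, so check the exact names for your project with the BEAR team.

```bash
#!/bin/bash
#SBATCH --account=example-priority-project  # hypothetical: a BEAR project nominated for priority access
#SBATCH --qos=bbpriority1                   # hypothetical QOS name, for illustration only
#SBATCH --ntasks=40                         # e.g. drawing on the additional 40 cores of Priority 1 (see table below)
#SBATCH --time=24:0:0

srun ./my_simulation                        # placeholder application
```

Submitted at the same time as an identical job in the shared queue, a job like this would generally start first.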
Priority CPU Access
| Priority level | Additional CPU limit | Additional RAM limit | Years | Price |
| --- | --- | --- | --- | --- |
| Priority 1 (5 years) | 40 cores | 0.25 TB | 5 | £4,000 |
| Priority 1 (per year) | 40 cores | 0.25 TB | 1 | £1,000 |
| Priority 2 (5 years) | 80 cores | 0.5 TB | 5 | £8,000 |
| Priority 2 (per year) | 80 cores | 0.5 TB | 1 | £2,000 |
| Priority 3 (5 years) | 120 cores | 0.75 TB | 5 | £12,000 |
| Priority 3 (per year) | 120 cores | 0.75 TB | 1 | £3,000 |
| ... Priorities 4 to 9 ... | | | | |
| Priority 10 (5 years) | 400 cores | 2.5 TB | 5 | £40,000 |
| Priority 10 (per year) | 400 cores | 2.5 TB | 1 | £10,000 |
As of July 2024, the standard, free limit is 1344 cores per user at any one time. The maximum "Priority 3" job would therefore be 1344 + 120 = 1464 cores - or, equivalently, a user could run 1464 simultaneous single-core jobs, etc.
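If you want to check the limits actually attached to your access level, Slurm's accounting tools can usually report them, provided the site makes the accounting database readable to users (an assumption here, as is the idea that each priority level maps to a QOS):

```bash
# List each QOS with its per-user resource caps.
# MaxTRESPU shows per-user limits, e.g. cpu=1464 for a hypothetical "Priority 3" QOS.
sacctmgr show qos format=Name%20,MaxTRESPU%40
```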
Priority GPU Access
| Priority level | Additional GPU limit | Price |
| --- | --- | --- |
| GPU Priority 1 (5 years) | 1 | £12,000 |
| GPU Priority 1 (per year) | 1 | £3,000 |
| GPU Priority 2 (5 years) | 2 | £24,000 |
| GPU Priority 2 (per year) | 2 | £6,000 |
| GPU Priority 3 (5 years) | 3 | £36,000 |
| GPU Priority 3 (per year) | 3 | £9,000 |
| ... GPU Priorities 4 to 9 ... | | |
| GPU Priority 10 (5 years) | 10 | £120,000 |
| GPU Priority 10 (per year) | 10 | £30,000 |
As of July 2024, the standard, free limit is 4 GPUs per user at any one time. The maximum "GPU Priority 3" job would therefore be 4 + 3 = 7 GPUs - or, equivalently, a user could run 7 simultaneous single-GPU jobs, etc.
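Continuing the same hypothetical setup, a GPU job using the raised limit might look like the sketch below; `--gpus` is standard Slurm syntax for the total GPU count of a job, while the account and QOS names remain placeholders.

```bash
#!/bin/bash
#SBATCH --account=example-priority-project  # hypothetical nominated project
#SBATCH --qos=bbgpupriority3                # hypothetical GPU priority QOS
#SBATCH --gpus=7                            # 4 (free limit) + 3 ("GPU Priority 3") = 7 GPUs in total
#SBATCH --time=12:0:0

srun ./train_model                          # placeholder application
```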
These prices are subject to regular review, so the prices listed above may be out of date. To find out more about this service, including confirmed pricing, please file an Other BEAR Request to let us know what you need and we will advise.
Priority access is only available for our standard CPU and GPU compute resources. If you need non-standard resources then you may need the "Dedicated Compute Resources" option below.
Dedicated Compute Resources
Alternatively, a research group can purchase dedicated compute resources. In this case the research group gets (virtually) dedicated access to the paid-for resource for a period of 5 years. For efficiency, we allow very short jobs (less than 10 minutes) from other BlueBEAR users to run on these resources. This means that there could be a delay of up to 10 minutes before jobs start, although in practice the delay is often zero.
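From the other side, the only user-visible property that makes a job eligible to fill idle time on dedicated nodes is its short time limit; how the scheduler actually routes such jobs is an internal detail, so the sketch below is illustrative only:

```bash
#!/bin/bash
#SBATCH --time=9:59   # under 10 minutes (minutes:seconds), so short enough to slot onto idle dedicated nodes
#SBATCH --ntasks=4

srun ./quick_check    # placeholder: a short test or post-processing step
```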
To find out more about this service, including indicative pricing for the options that are available, please file an Other BEAR Request to let us know what you need and we will advise.
BEAR Cloud (Virtual Machines)
BEAR Cloud is an onsite private cloud operated as part of the wider BEAR service offering, dedicated to supporting computationally intensive or data-intensive research. BEAR Cloud virtual machines, like all BEAR premium resources, are usually paid-for options, and are available where BlueBEAR is not suitable for the research.
Storage
Resource purchases are again based on a five-year cycle and take account of our ability to relegate little-accessed data and archive material to much more cost-effective media such as tape. Policy and pricing for storage can be found in the Research Data Service section. Storage is provided on the RDS.
Timescales
One year before the agreed end date of funded resources, the registered PI for the purchasing group will be notified in order to allow the group to review its requirements and make arrangements for any future needs.
The maximum length of time for paid-for resources is five years. This refresh cycle is roughly in line with the life expectancy of the equipment and the pace of change in the technology, which means four-year-old kit is starting to become obsolete, incompatible with needs and even uneconomic to operate. The Advanced Research Computing Team uses the central investment in BEAR to smooth this cycle and maintain service, but must withdraw resources at the end of the five-year term. Service Extensions may be bought at any point from the start of the second year, in one-year increments.[1] The cost is based on the framework pricing for the equivalent node/resource current at the time an order is raised. This ensures compatibility with the evolving BEAR infrastructure and that the spec of the resource will at least match (and possibly exceed) what was originally purchased.[2]
[1] Note that, at any point in time, you cannot hold more than 5 years' future provision. This is essential to set a practical planning window, based on IT Services' funding and strategy for research computing.
[2] Where a funder mandates the purchase of physical hardware rather than an equivalent ‘resource as a service’ (e.g. for some EU projects), an accommodation may be possible. Please log a call via the IT Service Desk to discuss your constraints and possible BEAR solutions.