Dell EMC on Tuesday introduced its new-generation rugged PowerEdge XR servers, designed to work in the harsh environments of industrial sites or even conflict zones. The new machines are equipped with Intel's latest Xeon Scalable (Skylake-SP) processors along with up to 512 GB of DDR4 memory and up to 30 TB of solid-state storage. In addition, the new PowerEdge XR2 features a special common access card that provides an additional layer of authentication for drive encryption and, once removed, can instantly render the machines useless to an enemy.

As computing becomes pervasive, servers are needed everywhere, including in severe environments such as industrial sites, rural areas, and conflict zones. Dell is among the few companies that offer ruggedized laptops, tablets, and other hardware, so it was also among the first to address such needs. Dell's first-gen rugged servers were custom-built machines based on the company's PowerEdge servers and were sold to select customers. Because of growing demand for such machines, and because Dell realized that servers for harsh environments have to be designed for extreme conditions from scratch, the company introduced its first "official" rugged server, the PowerEdge R420xr, back in 2014. Today, the company is launching its second-generation purpose-built rugged server (which is really the company's third-gen rugged server platform): the PowerEdge XR2.

Just like its predecessor, the Dell EMC PowerEdge XR2 comes in a 1U, 20"-deep chassis that features shock and vibration resistance, offers optional dust filtration, and uses components certified to work at low and high temperatures (from -5ºC to +55ºC) as well as at altitudes of up to 15,000 feet* (think specialized DRAM modules, SSDs, and other chips). Meanwhile, the new machine is completely different from the system launched over three years ago and brings huge performance improvements.

The PowerEdge XR2 is based on two Intel Xeon Gold (Skylake-SP) processors with up to 22 cores, 30 MB of L3 cache, and a 140 W TDP. Power consumption is a concern for ruggedized servers because of cooling, so Dell decided to stick with CPUs with a moderate TDP (after all, 44 cores is a lot). The primary CPU socket can be equipped with 10 DIMMs (four channels at 2 DPC and two channels at 1 DPC), whereas the secondary CPU socket supports six DDR4 memory modules (six channels at 1 DPC), for a potential 512 GB of DRAM per box (up from the 384 GB supported by the R420xr). The asymmetric memory configuration may look a bit odd, and Intel's latest Xeon Scalable CPUs can physically support more memory, but for its rugged servers Dell intends to use certified DIMMs made for severe environments, and their capacity is limited to 32 GB per module today.
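The memory math above can be double-checked with a quick sketch (the slot counts and the 32 GB module ceiling are taken from the article; the variable names are purely illustrative):

```python
# Maximum DRAM for the PowerEdge XR2, per the DIMM layout described above.
DIMM_CAPACITY_GB = 32   # largest certified module Dell offers today
CPU1_DIMM_SLOTS = 10    # 4 channels at 2 DPC + 2 channels at 1 DPC
CPU2_DIMM_SLOTS = 6     # 6 channels at 1 DPC

total_dimms = CPU1_DIMM_SLOTS + CPU2_DIMM_SLOTS
max_memory_gb = total_dimms * DIMM_CAPACITY_GB
print(max_memory_gb)  # 512
```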

The new storage subsystem is something Dell can be proud of: the PowerEdge XR2 supports eight hot-swappable SATA/SAS SSDs with a total capacity of up to 30 TB (up from 6.4 TB on the previous-gen model). Optionally, Dell can equip the system with self-encrypting SSDs, but by default the machine encrypts the drives itself and requires Dell's common access card to access/decrypt them. Once the card is removed, the drives cannot be accessed by unauthorized personnel or an adversary, which will come in handy in various conflict zones. Depending on the customer's needs, the PowerEdge XR2 can be equipped with a variety of RAID controllers.
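The quoted storage total lines up with eight drives of roughly 3.84 TB each. The per-drive capacity below is an assumption for illustration (a common enterprise SSD class at the time); the article only quotes the 30 TB aggregate:

```python
# Rough check of the ~30 TB total across eight 2.5" bays.
DRIVE_COUNT = 8
DRIVE_CAPACITY_GB = 3840  # assumed 3.84 TB-class SATA/SAS SSDs

total_gb = DRIVE_COUNT * DRIVE_CAPACITY_GB
total_tb = total_gb / 1000
print(total_tb)  # 30.72
```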

Two CPUs, multiple memory modules, and eight high-end SSDs consume a substantial amount of power, so Dell equips the PowerEdge XR2 with a 550 W redundant PSU to feed the aforementioned components. Meanwhile, due to volume restrictions, TDP limits, and power constraints, the new machine does not support any accelerators, just like its predecessor. This may be a pity for oil and gas exploration applications, many of which rely on NVIDIA's Tesla accelerators, but oil and gas companies can typically afford to build custom hardware for their exploration needs.

Depending on the exact configuration, the Dell PowerEdge XR2 machines can be equipped with 1Gb, 10Gb BASE-T, 10Gb SFP+, and 40Gb QSFP+ network cards. The XR2 machines are IPMI compliant and support Dell's proprietary iDRAC9 remote management. As for operating systems, expect the machine to be compatible with Microsoft's Windows Server as well as various Linux builds.

Dell PowerEdge XR2: General Specifications

Form Factor: 1U, 20"-deep chassis

CPU: Two Intel Xeon Scalable processors, up to 140 W TDP each:
- Intel Xeon Gold 6152
- Intel Xeon Gold 6140
- Intel Xeon Gold 6138
- Intel Xeon Gold 6132
- Intel Xeon Gold 6130
- Intel Xeon Gold 6126
- Intel Xeon Gold 5122
- Intel Xeon Gold 5120
- Intel Xeon Gold 5120T
- Intel Xeon Gold 5118
- Intel Xeon Silver 4116
- Intel Xeon Silver 4114
- Intel Xeon Silver 4112
- Intel Xeon Silver 4110
- Intel Xeon Silver 4108
- Intel Xeon Bronze 3106
- Intel Xeon Bronze 3104

Chipset: unknown

RAM: Up to 512 GB DDR4-2667 RDIMMs with ECC
- CPU1: 10 DIMMs (4 channels at 2 DPC, 2 channels at 1 DPC)
- CPU2: 6 DIMMs (6 channels at 1 DPC)
- 8 GB, 16 GB, and 32 GB modules supported

Storage Controllers: PERC H330, PERC H730P, PERC HBA330

Storage:
- 2.5" SATA/SAS: up to 8 hot-plug drives, 30 TB total capacity
- M.2 SATA: 2 drives for fast boot/OS
- Up to three drive form-factor storage devices

Expansion: 2 × PCIe x16 slots for NICs

Networking: Integrated Broadcom 5720 2×1Gb LOM, plus an optional LOM riser:
- Broadcom 5720 2×1Gb
- Broadcom 57416 2×10Gb Base-T
- Broadcom 57416 2×10Gb SFP+

I/O:
- Front ports: D-Sub, eSATA, USB 2.0, dedicated iDRAC Direct Micro-USB
- Rear ports: D-Sub, RS232, 2×USB 3.0, dedicated iDRAC network port

Embedded Management: IPMI 2.0 compliant; iDRAC9 with Lifecycle Controller (Express, Enterprise); optional Quick Sync 2 wireless module

Security: Optional TPM 1.2/2.0, cryptographically signed firmware, Secure Boot, System Lockdown, Secure Erase, integrated Common Access Card reader

Harsh Environment Testing: MIL-STD-810G (temperature, shock, vibration, altitude, sand/dust); MIL-STD-461G (conducted/radiated immunity); DNV-GL (temperature, humidity, vibration, EMC)

PSU: 550 W redundant

Dell EMC will sell its PowerEdge XR2 machines directly as well as through resellers and OEMs, who may adapt them for particular needs by loading software and performing other customizations. OEMs can also install their own bezels and load the BIOS with their own logotypes and/or features.

*Formally, the machines are compliant with MIL-STD-810G and MIL-STD-461G requirements for temperature, shock, vibration, altitude, and conducted/radiated immunity, as well as DNV GL and IEC 60945 requirements for maritime navigation and radiocommunication equipment when it comes to temperature, humidity, vibration, and EMC.

Source: Dell

  • DanNeely - Wednesday, December 6, 2017 - link

    How do these handle dust? I'm assuming they're not fanless, so do they just have super-oversized cooling to handle the extra insulation, or are they gambling on customers remembering to clean filters regularly?
  • HStewart - Wednesday, December 6, 2017 - link

    A machine like this is typically in a special server room, which is extremely cool and clean. It has been done this way for decades - I remember working in an IBM mainframe center in the late 80's, and you needed a coat.

    This is not just Intel; AMD EPYC servers have fans also - and just like my Supermicro Xeon workstation from 10 years ago, they have redundant fans and redundant power supplies.

    Intel does have the 16-core C3958, which I believe is fanless - but I like the power of this server
  • DanNeely - Wednesday, December 6, 2017 - link

    These are rough service/industrial models intended for use outside of the nice data centers normal servers live in.
  • HStewart - Wednesday, December 6, 2017 - link

    This machine has optional filters - and it appears they have an extensive support system - I would think that just like the fans and power supply, the filters will be plug and play

    https://www.anandtech.com/Gallery/Album/6011#6

    There's likely a manual somewhere that has information on how to deal with the filters
  • Notmyusualid - Thursday, December 7, 2017 - link

    @ HStewart

    *Only* a coat? Welcome to my world. I've had hat, coat, and scarf on from 10am this morning to 8pm this evening.

    It's odd going outside to get warm.

    Never-mind the fan noise...
  • Holliday75 - Thursday, December 7, 2017 - link

    That cold? My last job in a DC our cold aisle temps were 68-72 degrees Fahrenheit. I just wore jeans and a Columbia zip up and was fine.....step into a hot aisle for an extended period and I took it off since it was usually in the 80's.
  • ddrіver - Friday, December 8, 2017 - link

    HStewart, does "designed to work in harsh environments of industrial sites or even conflict zones" suggest a clean room environment? You don't need a rugged server in a normal datacenter.
  • ilt24 - Wednesday, December 6, 2017 - link

    If your server is being used in a 'dirty' environment, then cleaning/replacing the filter(s) needs to be a preventive maintenance task. Most rack servers are set up to suck air in the front and toss it out the back. I can't tell from the pictures in the article, but it wouldn't surprise me if the face plate contains an air filter.
  • Jorsher - Tuesday, December 12, 2017 - link

    Being someone that has worked in dusty environments for years and one of the exact situations these servers target -- you have to remember to clean them. Honestly, though, I've tore down some setups with equipment that has been running for years without issue. Moving them from one location to another to wipe them, a lot of dust is loosened and booting back up it looks cartoonish the dust cloud that shoots out.

    This equipment generally isn't shut off intentionally once put into production, but I've not seen an issue yet where the dust has caused problems, despite knowing it's drilled into your head how important it is to clean it. Outside is dusted, inside is generally left alone unless something occurs and it's offline.
  • damianrobertjones - Wednesday, December 6, 2017 - link

    The moment a Dell server reaches a certain age... Dell could really do with an option to remove the stupid firmware white list that excludes a PAYING customer using hard drives that are not from Dell.

    The restriction is ridiculous (but I do understand why they have to do this).

    Plus the prices of their hard drive options are mind-blowing.
