Tuesday 30 July 2019

New AI Toolkit Powers Large-Scale Deep Learning for Medical Imaging

The new NVIDIA Clara AI Toolkit enables developers to build and deploy medical imaging applications to create intelligent instruments and automated healthcare workflows.


In today’s hospitals, medical imaging technicians are racing to keep pace with workloads stemming from the growing use of CT scans, MRI scans and other imaging used in diagnostic processes. In a large hospital system, a relatively small number of technicians might be hit with hundreds or even thousands of scans in a single day. To keep up with the volume, these overworked technicians need tools that assist with analyzing complex images, identifying hard-to-detect abnormalities and ferreting out indicators of disease.

Increasingly, medical institutions are looking to artificial intelligence to address these needs. With deep-learning technologies, AI systems can now be trained to serve as digital assistants that take on some of the heavy lifting that comes with medical imaging workflows. This isn’t about using AI to replace trained professionals. It’s about using AI to streamline workflows, increase efficiency and help professionals identify the cases that require their immediate attention. Hospital IT teams need to strategize to make their infrastructure AI-ready. NVIDIA and the American College of Radiology have partnered to enable thousands of radiologists to create and use AI in their own facilities, with their own data, across a vast network of hospitals.

One of these AI-driven toolsets is NVIDIA Clara AI, an open, scalable computing platform that enables development of medical imaging applications for hybrid (embedded, on-premises or cloud) computing environments. With the capabilities of NVIDIA Clara AI, hospitals can create intelligent instruments and automated healthcare workflows.

The Clara AI Toolkit


To help organizations put Clara AI to work, NVIDIA offers the Clara Deploy SDK. This Helm-packaged software development kit encompasses a collection of NVIDIA GPU Cloud (NGC) containers that work together to provide an end-to-end medical image processing workflow in Kubernetes. NGC container images are optimized for NGC-Ready GPU-accelerated systems, such as Dell EMC PowerEdge C4140 and PowerEdge R740xd servers.

The Clara AI containers include GPU-accelerated libraries for computing, graphics and AI; example applications for image processing and rendering; and computational workflows for CT, MRI and ultrasound data. These features leverage Docker and Kubernetes to orchestrate medical image workflows and connect to PACS (picture archiving and communication systems) or scale medical instrument applications.

Fig 1. Clara AI Toolkit architecture

The Clara AI Toolkit lowers the barriers to adopting AI in medical-imaging workflows. The Clara Deploy SDK includes:

◈ DICOM adapter data ingestion interface to communicate with a hospital PACS

◈ Core services for orchestrating and managing resources for workflow deployment and development

◈ Reference AI applications that can be used as-is with user-defined data or modified with user-defined AI algorithms

◈ Visualization capabilities to monitor progress and view results

Server Platforms for the Toolkit


For organizations looking to capitalize on NVIDIA Clara AI, Dell EMC provides two robust, GPU-accelerated server platforms that support the Clara AI Toolkit.

The PowerEdge R740xd server delivers a balance of storage scalability and performance. With support for NVMe drives and NVIDIA GPUs, this 2U two-socket platform is ready for the demands of Clara AI and medical imaging workloads. The PowerEdge C4140 server, in turn, is an accelerator-optimized, 1U rack server designed for the most demanding workloads. With support for four GPUs, this ultra-dense two-socket server is built for the challenges of cognitive workloads, including AI, machine learning and deep learning.

In the HPC and AI Innovation Lab at Dell EMC, we used the Clara AI Toolkit with CT Organ Segmentation and CT Liver Segmentation on our GPU-accelerated servers running Red Hat Enterprise Linux. For these tests, we collected abdominal CT scan data, a series of 2D medical images, from the NIH Cancer Image Archive. We used the tools in the Clara AI Toolkit to execute a workflow that first converts the DICOM series for ingestion and then identifies individual organs from the CT scan (organ segmentation).

Next, the workflow can use those segmented organs as input to identify any abnormalities. Once the analysis is complete, the system creates a MetaIO-annotated 3D volume rendering that can be viewed in the Clara Render Server, and DICOM files that can be compared side by side with medical image viewers such as ORTHANC or Oviyam2.
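To make the shape of that workflow concrete, here is a minimal Python sketch of the same conceptual steps: ingest a DICOM series, segment organs, and write a MetaIO volume for rendering. The pydicom, NumPy and SimpleITK libraries are real; run_organ_segmentation() is a hypothetical placeholder for the model inference that Clara’s NGC containers perform internally.

```python
import glob
import numpy as np
import pydicom
import SimpleITK as sitk

def load_dicom_series(directory: str) -> np.ndarray:
    """Read a 2D DICOM series and stack it into a 3D volume."""
    slices = [pydicom.dcmread(f) for f in glob.glob(f"{directory}/*.dcm")]
    # Order slices along the scan axis using the DICOM patient position tag.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    return np.stack([s.pixel_array for s in slices])

def run_organ_segmentation(volume: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for Clara's organ-segmentation inference."""
    return (volume > volume.mean()).astype(np.uint8)  # placeholder threshold

volume = load_dicom_series("abdominal_ct")   # ingested DICOM series
labels = run_organ_segmentation(volume)      # per-voxel organ labels

# Write a MetaIO (.mha) volume, the kind of artifact a render server consumes.
sitk.WriteImage(sitk.GetImageFromArray(labels), "segmentation.mha")
```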

Fig 2: Oviyam2 viewer showing a side-by-side view of a Clara AI-processed vs. original CT scan

Clara AI on the Job


While Clara AI is a relatively new offering, the platform is already in use in some major medical institutions, including Ohio State University, the National Institutes of Health and the University of California, San Francisco, according to NVIDIA.

The National Institutes of Health Clinical Center and NVIDIA scientists used Clara AI to develop a domain generalization method for the segmentation of the prostate from surrounding tissue on MRI. An NVIDIA blog notes that the localized model “achieved performance similar to that of a radiologist and outperformed other state-of-the-art algorithms that were trained and evaluated on data from the same domain.”

As these early adopters are showing, NVIDIA Clara AI is a platform that can provide value to organizations looking to capitalize on AI to enable large-scale deep learning for medical imaging.

Fig 3: Abnormalities on a segmented liver identified by the Clara AI Toolkit

Saturday 27 July 2019

Who Will You Trust to Unlock the Value of Your Data Capital?

Vanson Bourne and Dell EMC recently collaborated on the third installment of the Global Data Protection Index (GDPI) project – based on a survey of 2,200 IT Decision Makers across 18 countries – to better understand:

◈ The maturity of their data protection strategies
◈ The value they place on data
◈ How prepared they are during this era of rapid technological change

The study also generated a maturity index designed to illustrate the characteristics of leaders vs. laggards. After reviewing the findings in detail, it’s clear to me that we are collectively experiencing a fundamental shift in the way that we perceive data.

To provide context, in the early days of the digital economy we generally viewed data as a byproduct of the broader business. This may have included customer databases, archived documents, and applications necessary to “keep the lights on.” As the data deluge grew, new requirements took shape – storage capacity had to quickly expand, heating and cooling systems were constantly under strain, and IT was tasked with managing a complex and ever-expanding data lake.

This explosion of data isn’t slowing down.

According to the GDPI research, the average volume of data managed by the organizations surveyed rose from 1.45PB in 2016 to 9.7PB in 2018 – an increase of more than 6X. While it may be intuitive to continue categorizing this rapid data growth as a challenge that needs to be managed, it’s becoming more apparent that this accumulated data contains untapped value. According to the study, 92% of respondents see the potential value of data, while 36% consider data to be extremely valuable.
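For reference, a quick back-of-the-envelope calculation of what that growth implies (the PB figures come from the study; the compound-growth formula is standard arithmetic):

```python
# Growth implied by the GDPI figures above: 1.45 PB (2016) -> 9.7 PB (2018).
start_pb, end_pb, years = 1.45, 9.7, 2

multiple = end_pb / start_pb                   # ~6.7x over two years
cagr = (end_pb / start_pb) ** (1 / years) - 1  # ~159% compound annual growth

print(f"{multiple:.1f}x overall, or {cagr:.0%} per year compounded")
```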


Not too long ago, organizations began to realize they could deliver more value leveraging data analytics – enabling the proliferation of convenient customer-centric features such as customized recommendations provided by Netflix and Amazon. However, this shift in the perception of data is about much more than data analytics. In fact, many modern business models are entirely based on data – where data is the product. The most obvious examples are social media platforms and search engines, where our own preferences and behavior are what is monetized. Data has quickly become the most valuable asset for countless organizations globally, and it is also the fuel that drives the next generation of technologies and workloads such as IoT and Artificial Intelligence.

At Dell Technologies, we use the term Data Capital to describe the wealth or value that has been unlocked from data. According to the GDPI study, data protection maturity leaders have 18X more data under management compared to laggards. With the heightened value that has been placed on data, and the sheer volume of data necessary to establish and maintain a competitive advantage, naturally there should be a corresponding increase in value placed on the IT infrastructure solutions responsible for storing, managing, and protecting that data.

At this point we all know how important it is for mission critical data and applications to be safe and available, so it’s important to partner with a vendor you can trust. According to the GDPI survey data, organizations with multiple vendors are 27% more likely to have experienced unplanned system downtime and 43% more likely to have experienced data loss compared to those with one vendor. Furthermore, the average cost of downtime is more than double and average cost of data loss is almost double for those with multiple vendors.


Dell Technologies is the only end-to-end infrastructure provider with extensive portfolios in both storage and data protection (and much more). Working with one trusted vendor across both storage and data protection means you’re utilizing technology that is designed with compatibility as a top priority – from the purchase process through deployment and ongoing management. You also receive world-class customer support from one global support organization, and peace of mind through the Future-Proof Loyalty Program – which covers both storage and data protection solutions.

Thursday 25 July 2019

When it Comes to Ransomware, the Best Offense is a Good Defense

The need for cybersecurity awareness and preparedness is once again top of mind as companies across the globe are reeling after the WannaCry ransomware attack last month, and now the NotPetya ransomware attack (also referred to as Petya or GoldenEye) just last week.


We have been speaking to numerous customers since the attacks, and all are trying to understand what more they can be doing to protect themselves. Unfortunately, malware variants like ransomware are not going to disappear anytime soon. In fact, according to the Department of Justice, 4,000 ransomware attacks happen daily, which adds up to 1,460,000 attacks a year, millions of dollars on the line and vast amounts of your data that could potentially be compromised.

In cybersecurity, the best offense is a good defense.

Threats evolve quickly and it is imperative that organizations implement a multi-faceted security approach that can effectively stop evolving threats. While there is no silver bullet for complete endpoint and data security protection, there are many solutions available today that can significantly help protect against threats and keep critical data secure. For those looking to protect themselves going forward, Dell has several security products available that can help.

The most important solution that organizations need to consider is an advanced threat prevention solution to identify malicious threats and stop them before they can run. There are many solutions available today, but they’re not all created equal. Many traditional anti-virus solutions are based on legacy technology – and legacy threats – of 20 years ago, when the number of malware variants was measured in the thousands per year, not hundreds of thousands per day. Signature-based anti-virus solutions have had a declining efficacy for years precisely because they can’t keep up with the multitude of variants out there, nor can they effectively protect against advanced threats such as zero-day attacks.
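To illustrate why exact signatures fall behind, consider this tiny, purely illustrative Python sketch: changing a single byte of a payload yields a completely different hash, so a database of exact signatures misses every trivially mutated variant.

```python
import hashlib

# A signature database of hashes for known-bad samples (illustrative only).
KNOWN_BAD_SIGNATURES = {hashlib.sha256(b"ransomware-payload-v1").hexdigest()}

def signature_scan(payload: bytes) -> bool:
    """Exact-match signature check, as legacy anti-virus engines do."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_SIGNATURES

print(signature_scan(b"ransomware-payload-v1"))  # True:  known sample caught
print(signature_scan(b"ransomware-payload-v2"))  # False: trivial variant missed
```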

Dell EMC can help.

Dell Endpoint Security Suite Enterprise integrates Cylance technology that employs artificial intelligence and mathematical models to protect against the execution of advanced persistent threats and malware including zero day attacks and targeted attacks such as ransomware. This solution stops up to 99 percent of malware and advanced persistent threats, far above the average 50 percent efficacy rating of many traditional anti-virus solutions. The suite combines data encryption with advanced threat prevention to protect data – so that if something does happen, the files are encrypted.

An advanced threat prevention solution is only one step. In our blog post about the WannaCry issue last month, we talked about the need to keep the software that you have in place updated and deploy all patches promptly. This is how the WannaCry attack occurred and became so widespread – the worm took advantage of a vulnerability in older versions of Windows, and the attackers bet that many organizations had not deployed the patch that was provided a few months prior. NotPetya is different in that it used more than one way to infiltrate systems and propagate itself, but one of the ways that it spread was through this same vulnerability. This demonstrates that known vulnerabilities will continue to be exploited because many organizations do not deploy patches in a timely manner – something that we’ll explore in greater detail in a future post.

Because attacks will happen, it is critical to have backup and recovery in place as well. One to look at is Mozy by Dell – a secure, cloud data protection solution for laptops, desktops and small servers across a distributed enterprise for easy recovery from data loss incidents like ransomware attacks. This way, if you are breached, you can recover your data on your own terms and it’s not lost forever. For enterprises, Dell EMC recovery solutions including storage-based replication and data protection solutions can also help recover business critical systems at the data center.

Tuesday 23 July 2019

New Server Hits the Machine-Learning Track


The new Dell EMC DSS 8440 server accelerates machine learning and other compute-intensive workloads with the power of up to 10 GPUs and high-speed I/O with local storage.


As high-performance computing, data analytics and artificial intelligence converge, the trend toward GPU-accelerated computing is shifting into high gear. In a sign of this momentum, the TOP500 organization notes that new GPU-accelerated supercomputers are changing the balance of power on the TOP500 list. This observation came in 2018 when a periodic update to the list found that most of the new flops came from GPUs instead of CPUs.

This shift to GPU-accelerated computing is having a major impact on the HPC market. IDC projects that the accelerated server infrastructure market will grow to more than $25 billion by 2022, with the accelerator portion accounting for more than half of that volume.

“With AI manifesting itself in the datacenter and the cloud at a phenomenal rate and with traditional high-performance computing increasingly looking for performance beyond the CPU, the quest for acceleration is heating up, as is the competition among vendors that offer acceleration products,” an IDC research manager notes.

Driving accelerated computing forward


At Dell EMC, the Extreme Scale Infrastructure (ESI) group is helping organizations catch the accelerated-computing wave with a new accelerator-optimized server designed specifically for machine learning applications and other demanding workloads that require the highest levels of computing performance.

The new Dell EMC DSS 8440 is a two-socket, 4U server with 10 full-height PCIe slots in front and 6 half-height PCIe slots in back, launching with 4, 8 or 10 NVIDIA® Tesla® V100 GPUs to create the right balance of accelerators. It also incorporates extensive I/O options with up to 10 drives of local storage (NVMe and SAS/SATA) to provide increased performance for compute-intensive workloads, such as modeling, simulation and predictive analysis in scientific and engineering environments.

The new design enables accelerators, storage and interconnect on the same switch for maximum performance, while providing the capacity and thermals to accommodate future technologies. Offering efficient performance for common frameworks, the DSS 8440 server is ideal for machine learning training applications, reducing the time it takes to train machine learning models and time-to-insights. It allows organizations to easily scale acceleration and resources at the pace of their business demands.

The rise of a new machine


The DSS 8440 was developed in response to customer demand for even higher levels of acceleration than were previously offered by Dell EMC, according to Paul Steeves, a product manager for the new server.

“As our customers push further ahead with machine learning solutions, it has become obvious that there was a need for increased amounts of accelerated raw horsepower,” Steeves says. “While accelerated servers exist from our competitors, many of our customers want open solutions, with choice not just now, but also over time as technology advances.”

In addition, Dell EMC designed the DSS 8440 server specifically with machine learning training in mind, Steeves notes. For example, the system includes 10 high-performance local drives and extensive I/O options to deliver a more targeted solution for today’s growing number of machine learning workloads.

Key takeaways


◈ The DSS 8440 server offers extremely high levels of acceleration with up to 10 NVIDIA V100 GPUs in an open PCIe fabric architecture that allows other open-standard components to be easily added in future versions.

◈ The DSS 8440 server delivers the raw compute performance that HPC-driven organizations need today, coupled with the flexibility to adopt new machine learning technologies as they emerge.

Putting the system to work


The DSS 8440 server is designed for the challenges of the complex workloads involved in the process of training machine learning models, including those for image recognition, facial recognition and natural language translation.

“It is particularly effective for the training of image recognition and object-detection models, where it performs within a few percentage points of the leading numbers — but with a power efficiency premium,” Steeves notes.

Another strength of the DSS 8440 server is its ability to enable significant multi-tenant capabilities.

“With 10 full-height PCIe slots available, customers can assign machine learning or other compute-intensive tasks to several different instances within a single box,” Steeves says. “This allows them to readily distribute compute among departments or projects.”

The bottom line


As organizations move more deeply into machine learning, deep learning applications and other data- and compute-intensive workloads, they need the power of accelerators under the server hood. The new Dell EMC DSS 8440 server meets this need with a versatile balance of accelerators, high-speed I/O and local storage.

Sunday 21 July 2019

Bringing Simplicity to a Complex World – No Easy Task

“Simplicity is a great virtue, but it requires hard work to achieve it and education to appreciate it. And to make matters worse: complexity sells better.”

I could not agree more with this quote from the Dutch essayist and pioneer in computing science, Edsger Dijkstra – famous for his works on algorithms from the ‘60s to the ‘80s. For the 30+ years I have been working in the IT industry, I have witnessed that with every new wave of hype comes the promise of a complexity killer whereas, in fact, the new trend often creates more data silos to handle, at least for a transition period.


The most recent example is cloud computing, whose scalable pay-per-use model can bring real flexibility advantages to users, while also generating infrastructure chaos if there is no integrated multi-cloud management solution to bring consistency between private clouds, public clouds and on-premises datacenters. 93% of companies will use more than one cloud. They need a unifying partner to help them manage this complexity – connecting teams and processes across different platforms. Dell Technologies offers services, solutions and infrastructure to achieve consistency in a multi-cloud world and eliminate obstacles.

As a CFO, I consider it part of my mission to fight unnecessary complexity whenever I can. I share the opinion of Jim Bell, a former CFO turned CEO, that complexity is the enemy of agility and that some level of automation (through selected RPA technologies, for instance) can help make things like planning and forecasting simpler in an age where companies are more and more data-driven.

Now, how do you take all the noise away and make sure you focus on tools and data that really bring some return on investment to the business?

1. I think the first milestone on the road to simplicity is to create and apply metrics that integrate user-friendliness when trying to calculate productivity gains yielded by a piece of software or an app. Dare to question (pilot) users on the time they need to make their way through the solution. How simple do they find it? Do they confirm the efficiency gains that the sales rep convinced you of? Do they see room for improvements that would make their lives much easier?

2. Secondly, when rolling out a new solution, set the right framework around the project. By ‘right’, I mean a steering committee, for instance, that has the authority to take (drastic) corrective action without delay. Concretely, make sure you have a good balance in that decision body between ‘subject matter experts’ and ‘outsiders’ so that you have different points of view on what is complex or not. In any case, you need mavericks who will challenge the project on the simplicity/user-friendliness side. The profile of the ‘maverick’ will depend on the type of project. For instance, in a very process-driven accounting project, it is interesting to have someone with a creative personality track the ease of use of the project, in combination with more system-driven types of people.

3. My third tip is to learn and share lessons from every IT project so that each project is a step forward on an improvement path towards greater efficiency. For instance, every year in January, I put ‘simplifying the complex’ on my list of priorities to discuss with the team, based on what we learnt from the past year.

4. Last but not least, I think fighting complexity often comes down to changing (bad) habits – “we have always worked that way, so it is probably the most efficient.” I am convinced that simplicity starts with the right mindset – an ability to challenge things and be open to change. Why should we keep on with complex processes if there are simpler alternatives? It is a mindset that should be encouraged in the workplace, certainly towards newcomers who do not have a biased view yet.

In a recent podcast on the evolution of the CFO, McKinsey consultants refer to the finance function and the CFO as a talent factory which needs to flex different muscles to attract, retain and drive talent going forward. I am convinced that the ability to bring more clarity in things that tend to be messy is one of these key muscles.

Are you too? Do not hesitate to share comments or experiences on how you fight complexity in your work environment.

Saturday 20 July 2019

Next Frontier of Opportunity for OEMs: Data Protection

Roughly two terabytes of lost data costs organizations nearly $1 million in one year, on average.

Imagine if your email provider had a major outage and your last 48 hours of emails were lost. Or your recent radiology scans vanished because the imaging repository crashed at your hospital.

Startling statistics


Alarming episodes of data loss happen more often than we think, as the annual Global Data Protection Index (GDPI), a survey commissioned by Dell Technologies, reveals. The research, which surveyed 2,200 decision makers from organizations with 250+ employees across 18 countries and 11 industries, reports some startling statistics:

◈ On average, 2.13 terabytes of lost data cost organizations $995,613 over the last 12 months

◈ Only 33% reported high confidence that their organization could fully recover from data loss and meet Service Level Objectives (SLOs)

◈ In the last 12 months, 76 percent of organizations experienced a disruption, and 27 percent experienced irreparable data loss, nearly double the 14 percent in 2016.

Huge opportunity for OEMs


What does this mean for OEMs and application providers? Opportunity. Data protection differentiates your offerings from competitors with premium business value and shields your brand from risk. Further, it can generate an incremental revenue stream by offering data protection as a service to your customers via the cloud.

The real cost of data loss


OEMs and application owners traditionally have steered clear of data protection since they view data as their customers’ responsibility. This can be shortsighted, particularly as data value and the cost of data loss grow exponentially. According to the GDPI, 74% of organizations are monetizing data or investing in tools to do so. The high costs of non-compliance with regulations, brand damage from data loss or cybersecurity attacks, and the rapid expansion of data-driven decisions are driving this trend. Also, 96% of organizations that suffered data loss and/or unplanned systems downtime experienced productivity decreases, inability to provide essential services, product/service development delays, and revenue loss, among other outcomes.

Integrated solutions now available


Another data protection concern for OEMs is complexity. The good news is it’s easier than ever to deliver data protection to your customers. No longer do you need to cobble together multiple backup and replication solutions. Dell Technologies, for example, provides integrated data protection solutions that enable seamless backup, restore, deduplication, and management with a few clicks across cloud, virtualized, physical, and on-premises environments. Dell Technologies OEM | Embedded & Edge Solutions works with partners to co-develop enhanced data protection services, such as the Teradata Backup, Archive and Recovery solution.

Artificial Intelligence and Machine Learning


The GDPI reports that 51% of respondents cannot find suitable data protection solutions for artificial intelligence (AI) and machine learning (ML). Customers also struggle to protect other emerging technologies, including IoT and robotics. Again, this presents a big opportunity to add value to your solutions. New use cases and workloads fueled by AI/ML present unique challenges to OEMs.

New data challenges


Enormous amounts of data, be it from on-premises analytics or edge sensors, are required for ongoing calculations. Previously, this historical data would have been discarded or, at best, archived. What’s more, these petabytes of data are a critical part of your IP and likely the source of future revenue opportunities.

Everyone agrees that production data is valuable and must be protected, but how will you handle this new data challenge? As enterprise data points expand from data centers to automated oil rigs to robotics-driven factories to sensor-equipped retail stores and beyond, your customers require multi-pronged data protection solutions that work seamlessly with their applications and core, cloud, and edge environments.

Sizing the opportunity


To help assess this opportunity, we suggest downloading the GDPI for a detailed view, including global and regional infographics.

Making data protection a primary solution design consideration, just as you treat storage, servers, and networking, is a high-reward opportunity to offer differentiated value, create more revenue, and help your customers grow and succeed.

Thursday 18 July 2019

Not Just Another G: The Next Generation


The next-generation 5G architecture is built around the realization that different services are consumed differently, and by different types of users. Thus, next-generation mobile access technology must have:

1. A way to define those differences,

2. A way to determine and place constraints so as to meet those differences, and

3. A way to architect access methods that meet the goals of the different services that ride on top of the technology.

It is to this end that 5G technology has built-in support for what’s called “network slicing” – a fancy way of saying that the network is sliced up, with each slice configured to meet the needs of a singular class of service.

In the 5G architecture, for example, there is a slice designed to deliver common mobile consumer data. This slice delivers the high-throughput data consumers want access to – things like pictures, videos, live video interactions, remote mailbox access or remote shared data vault access.

Another slice is designed for what are called “latency-critical” applications. Imagine a connected, self-driving, auto-diagnosing car of the future. The car, connected to 5G, will be the “new cell phone”. It will automatically make things happen so that the driver can choose not to be in control and enjoy life or get work done while commuting. This requires a fast, high-speed, reliable, always-available and latency-critical network. The 5G latency-aware slice allows a network design that can make these guarantees. By the way, the car is just one of many such latency-critical applications.

Another slice of the network is designed to meet both the latency and the capacity needs of the service. Consider the example of TeleHealth, a use case wherein a medical service provider is physically remote from the consumer. Many healthcare situations demand TeleHealth, which has seen only limited realization because a truly mobile, low-latency and capacity-aware network architecture has remained a challenge. All TeleHealth use cases require:

1. Interaction with no frame/audio drops,

2. Atomic guarantees of delivery – if a command was sent, the network must guarantee the delivery of that command and the response back, and

3. Ubiquity – whether for a stranded climber on a remote mountain or an inner-city youth who needs the help of a specialist at the Mayo Clinic, the network must always be there to support the service.
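To make item 1 above – “a way to define those differences” – more concrete, here is a purely illustrative Python sketch of how per-slice constraints might be modeled. All names and numbers are hypothetical, not 3GPP-defined values.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SliceProfile:
    name: str
    max_latency_ms: float       # end-to-end latency bound
    min_throughput_mbps: float  # sustained throughput floor
    availability: float         # fraction of time the slice must be up

# Hypothetical profiles for the three slices described above.
SLICES = [
    SliceProfile("consumer-broadband", max_latency_ms=50, min_throughput_mbps=100, availability=0.999),
    SliceProfile("latency-critical",   max_latency_ms=5,  min_throughput_mbps=10,  availability=0.99999),
    SliceProfile("telehealth",         max_latency_ms=10, min_throughput_mbps=50,  availability=0.99999),
]

def meets_latency(profile: SliceProfile, measured_latency_ms: float) -> bool:
    """Toy admission check: a slice is usable only if its latency bound holds."""
    return measured_latency_ms <= profile.max_latency_ms

print([s.name for s in SLICES if meets_latency(s, measured_latency_ms=8.0)])
```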

This new and innovative world requires a large amount of infrastructure. It requires an increase in cell stations, to which a multitude of end users will be connected in order to consume services. It requires compute, storage, and networking capabilities distributed across the edge of the network, enabling a service delivery platform running both network services and 3rd party application workloads. This edge platform coupled with differentiated classes of service provides new ways for Telcos to monetize the infrastructure and charge consumers.

At Dell Technologies, we are focused on creating the best possible infrastructure elements that will help the creation of next-generation mobile access networks. Dell EMC servers are best-in-class and hold the biggest market share. Dell EMC storage is second-to-none, and offers all types and variations as needed to suit the goals of any point of presence in a 5G network. Dell EMC Networking gear brings it all together, in a self-aware, software-defined, declarative manner so that the network can adapt quickly to meet the demands of all the 5G slices.

Tuesday 16 July 2019

Data Protection Strategies for VxRail in a Multi-Cloud World

As customers are facing explosive data growth in their data centers – 163 ZB of data by 2025 – it has become imperative for businesses to protect and manage that data as well. However, data protection in the traditional data center can be inefficient, expensive, complex, and require multiple vendors. Dell EMC’s data protection solutions can simplify these complexities through integration, scalability, and automation to empower data owners with the necessary tools to meet the needs of fast-growing organizations of any size.

Colin Durocher, Product Manager from Dell EMC, and KJ Bedard, VxRail Technical Marketing Manager from Dell EMC, recently spoke about VxRail leading the HCI market by combining best-of-breed technologies to simplify the path towards the VMware cloud.

Powered by VMware vSAN, VxRail transforms IT faster. VxRail consolidates compute, storage and virtualization with end-to-end lifecycle management. VxRail customers see a 52% reduction in time spent on infrastructure deployment tasks. VxRail includes a full suite of software, including data protection and recovery provided by snapshots and stretched-cluster technology, along with data efficiency services that are ready for any virtualized workload. VxRail’s resilient architecture protects the integrity of virtual machines (VMs) as well as the individual profiles for each VM.


With VxRail, Dell EMC can meet any RPO/RTO service level objective and covers the widest ecosystem of applications and environments. Specifically, the VxRail data protection deployment schema offers six data protection solutions spanning private cloud protection and hybrid cloud disaster recovery (DR).

The three private cloud data protection solutions satisfy Tier-0, Tier-1 and local data protection.

1. VxRail active-active stretched clustering for Tier-0 data protection. This solution provides continuous availability with local clusters for site-level protection with zero data loss and near-instantaneous recovery, as well as automated failover in case of site failures. Stretched clusters support a minimum of 3 and a maximum of 15 nodes per site, local and remote. Benefits include using stretched clustering for disaster avoidance, planned maintenance activities and zero RPO. Upgrades are not customer driven and require contacting support. With site failure, vSAN maintains availability with local redundancy in the surviving site, while requiring no change in stretched cluster configuration steps.

2. Tier-1 DR between sites, powered by RecoverPoint for Virtual Machines (RP4VMs) – Architecturally, RP4VMs have no single point of failure and support sync, async, or dynamic replication. This enables data recovery to any point-in-time (PiT), locally or remotely. RP4VMs can be used in a multitude of use cases such as operational recovery in case of data damage, migration and data protection, automated disaster recovery, and data reuse.

3. Local data protection – Modern data centers need next generation data management software. Customers can extend Dell EMC on-premises with the Dell EMC PowerProtect X400 appliance and software, powered by Data Domain’s deduplication technology.


The Dell EMC PowerProtect X400 makes particular sense for VxRail because it is built on the same hardware, making it a natural fit for HCI. The hardware is optimized for data protection instead of requiring a separate VxRail cluster for data protection, and it can scale out like a VxRail.

PowerProtect software is also available for VMware Cloud™ on AWS and enables integration with on-premises data protection that simplifies administration, offers best-in-class deduplication and seamless integration, and simplifies management with automated operations.


The three hybrid cloud DR solutions satisfy Tier-2 and beyond:

1. Tier-2 DR, or VMware site recovery for VMware Cloud™ on AWS, combines the power of Site Recovery Manager with vSphere replication and the elasticity and simplicity of the cloud. This solution provides 1-click DR automation for low RTOs, as well as hypervisor-based VM replication, which copies snapshots to the remote site. VMware site recovery for VMware Cloud™ on AWS is equipped with ready-to-go infrastructure in the cloud and offers DR as a Service (DRaaS).

2. RP4VMs replicate the data to AWS S3 storage, offering customers a low-cost DR site. Cloud DR recovers to VMware Cloud™ on AWS on-demand, provides consistency and a familiar UI for vAdmins. Additionally, RP4VMs can be combined with local and remote snapshots for any-PiT protection.

3. Tier-3 DR, or PowerProtect Cloud DR, provides orchestrated DR, which enables test and failover to VMware Cloud™ on AWS, vMotion back to an on-premises location, and DR plans for multiple VMs. PowerProtect Cloud DR also has a minimal cloud cost and footprint, protecting data directly to AWS S3 storage and eliminating DR data center costs. This ensures that your virtualized environment is simple to operate by utilizing the existing on-premises UI, direct in-cloud access, and fully automated failover. Cloud DR Standard mode supports the option to recover directly from the VM copies stored in S3 to VMware Cloud™ on AWS.


These data protection approaches provide customers with the performance and simplicity that they need to address the operational and compliance requirements for the modern SDDC. Business needs combined with financial constraints drive the right solution for any given application. Customers with a variety of applications may elect to have a variety of solutions, but PowerProtect covers most of the space. With VxRail and VMware, Dell EMC offers the full spectrum of data protection solutions for any environment. This is just another example of how Dell EMC and VMware are better together.

Saturday 13 July 2019

You’re a Technology Company. Now What?

One of the greatest things about product releases and the excitement they draw is the opportunity to sit down and talk with customers. Yes, customers want to talk to us about what’s new, but more importantly, they want to share incredible stories about their journeys. As you listen to these stories, it becomes crystal clear that the “technology knife fights” vendors get into pale in comparison to the economics and business realities these businesses face. Let’s unpack what this means.


Digital Transformation – it’s not just a buzzword, it’s the new reality. Every organization in every industry is taking big strides to apply technology to their business. The truth is, it’s not even a choice anymore; it’s a key vehicle for gaining competitive advantage, and in many cases, a mode of survival. I remember a few years ago when Walmart stated on a financial analyst call, “We are a technology company.” Yes, they know retail better than most anyone, but their future success was dependent on transitioning to this digital age. Data is now one of the most critical and valuable assets organizations can own, but only if it is leveraged to drive operations and deliver insights.

Digital transformation presents an interesting dilemma for businesses – they have substantial technology investment decisions to make, and concerns about their ability to consume and onboard that investment in technology in a manner that isn’t overly disruptive. Cloud becomes one of the most obvious solutions organizations look to – consume only what you need, scale up to meet growing demands, and put the power in the hands of employees – what an incredible promise. But there are organizational realities that must be accounted for, especially when injecting technology like cloud.

Skillsets and Process Mismatches


Most organizations have already made substantial technology investments and as a result, resources have been hired with the skillsets tied to those investments. New technology and sweeping changes require new investments in reskilling and can be bad for employee satisfaction, morale, and retention. Additionally, existing processes likely add tremendous value and are often the product of extensive research and applied learnings over the years. Will you miss these processes if they are removed? What should they be replaced with? If these questions are not answered for both your skillsets and processes, then the result is often the injection of a lot of risk into the organization, or costly remediation efforts.

Talent Scarcity


There is vast competition for technical talent. These resources are being bombarded with tantalizing offers to join the big brand tech organizations and up-and-coming startups. Your business could potentially face an uphill battle securing the best and brightest, and it could necessitate paying a premium for some tech roles. This can result in not having the right skillset in house to make use of all these new technologies.

Putting Innovation on Hold


The whole point of digitally transforming is to apply technology to the business and leverage data as a competitive advantage. The longer it takes to digest the infusion of technology into the business and apply it to business activities, the longer it takes to see the fruits of your digital transformation. And the larger the scale of the change, the longer it takes – IT staff can get bogged down in a protracted migration or re-platforming effort that has a material impact on a quarter or fiscal year.

Elevating Your Cloud Strategy


It is essential for your organization to land on a pragmatic cloud strategy that filters technology choices through your people, processes, and objectives when deciding which technologies to onboard and to what degree. It’s not so much about what “operational nirvana” looks like. It’s a matter of what can feasibly be accomplished.

1. Take a look at your existing processes and see what can be leveraged and ported to the cloud to lower the friction of adoption and lower the complexity of the shift to a new operating environment.

2. Don’t get sucked into bleeding-edge technologies and the promises they might hold if there isn’t a corresponding way to incorporate them into your business. Search for solutions that align with your organization’s resources or for widely adopted technologies that offer a wide selection of resources.

3. Cloud offers a lot of great things, but you shouldn’t let cloud exuberance get in the way of an orderly transition to this new environment. Applications should be vetted, strategies should be clear, and early wins should be established before attempting massive overhauls.

It is important to keep your options open by deploying in hybrid environments that offer portability and reduce the management overhead of maintaining two or more clouds. Many CIOs we’ve talked to indicate that even when they make a substantial investment in the cloud, they have maintained facilities to mitigate the risk should a cloud exodus be required later.

By applying a methodology that accounts for business imperatives and sustainability in your technology selections and investments, you can avoid the pitfall of over-pivoting in your desire to put a compelling technology in place. At the end of the day, technology needs to serve the business need, even if you’re now a technology company.

Thursday 11 July 2019

Data Protection in a Multi-Cloud World

As organizations move along the path of digital transformation, enterprise cloud usage continues to evolve as well. IDC has predicted that by 2020, over 90% of enterprises will use multiple cloud services and platforms. Over time, workloads will become more dynamic and applications will span clouds or move between them for resiliency, performance and cost considerations.

The public cloud allows businesses to be much more efficient in consuming technology, paying only for what they need. In addition, public cloud service providers are constantly innovating around data management, artificial intelligence and machine learning. A modern hybrid strategy provides access to such innovations and allows businesses to exploit the best that each service provider has to offer.

With multi-cloud environments becoming the norm, organizations will have to face the complex task of pulling together a seamlessly integrated data protection strategy that will merge disparate cloud services and automate movement of data across their cloud ecosystems. Leading organizations worldwide use Dell EMC’s Data Protection Solutions to simplify, accelerate, and scale their backup and recovery environments.

Dell EMC Data Protection solutions support native backup tiering to public or private clouds for cost-effective storage of long-term backup retention data, eliminating the need for physical tape infrastructure. For customers looking to extend their data protection to the cloud, we provide solutions for leading cloud providers, including AWS, Azure, Google, and Alibaba. Our continued innovation has enhanced our hybrid and multi-cloud data protection capabilities and helps customers reduce the risk of data loss, decrease CAPEX and OPEX costs, and improve their operational management efficiencies.

Data Protection Solutions for Multi-Cloud


Long-Term Retention


When it comes to long-term retention in multi-cloud environments, the Data Domain (DD) family has been the number-one choice in the Purpose-Built Backup Appliance market. Both Data Domain and our Integrated Data Protection Appliances (IDPA) provide best-in-class deduplication and scale for your data protection needs, including the ability to tier data to the cloud for long-term retention. With Dell EMC Cloud Tier, you can send data directly from the DD appliance to any of the validated and supported cloud object storage providers – public, private or hybrid – for long-term retention needs. For example, Dell EMC helped Baker Tilly provide clients with 24×7 access to vital financial data.


Dell EMC Cloud Tier can scale up to 2x the max capacity of the active tier, increasing the overall DD system scalability by up to 3x. For example, the DD9800 scales up to 1PB of usable capacity on the active tier; therefore, the cloud tier can support up to 2PB of usable capacity.

Factoring in DD deduplication ratios, this results in up to 100PB of logical capacity being efficiently protected in the cloud and overall 150PB of logical capacity being managed by a single DD system. Purchase an IDPA DP4400, and you will receive 5TB of Dell EMC Cloud Tier for free.
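As a quick back-of-the-envelope check of the capacity figures above (the ~50x deduplication ratio is inferred from the stated 2PB-usable-to-100PB-logical relationship; real-world ratios vary by workload):

```python
# Cloud Tier capacity math for the DD9800 example above.
active_tier_pb = 1.0                # DD9800 active tier, usable PB
cloud_tier_pb = 2 * active_tier_pb  # Cloud Tier scales to 2x the active tier
dedup_ratio = 50                    # assumed logical:usable ratio (inferred)

logical_in_cloud = cloud_tier_pb * dedup_ratio                  # 100 PB
logical_total = (active_tier_pb + cloud_tier_pb) * dedup_ratio  # 150 PB

print(f"{logical_in_cloud:.0f} PB logical in the cloud, "
      f"{logical_total:.0f} PB logical under one DD system")
```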

Similarly, our new PowerProtect Software, software-defined and multi-cloud optimized for long-term retention, offers efficient data management capabilities across your ever-changing IT environment, leveraging the latest evolution of Dell EMC trusted protection storage architecture. With operational simplicity, agility and flexibility at its core, PowerProtect Software enables the protection, management and recovery of data at scale in on-premises, virtualized and cloud deployments. Self-service capabilities drive operational efficiency, and IT governance controls ensure compliance, making even the strictest service-level objectives easy to meet.

Cloud Disaster Recovery


For modern disaster recovery, Dell EMC offers Cloud Disaster Recovery (CDR) to copy backed-up VMs to the public cloud. CDR has been recognized for application-consistent cloud disaster recovery in AWS or Azure, as well as recovery to VMware Cloud™ on AWS. CDR reduces CAPEX by cutting the need to build additional data centers for disaster recovery, and with these new enhancements it further improves the viability of public cloud disaster recovery options. This Cloud Disaster Recovery solution also provides 5TB of CDR free with the purchase of an IDPA DP4400.


Dell EMC modern management simplifies the backup and recovery of VMware images across your data protection environment with an enhanced HTML 5 UI.

In-Cloud Data Protection


Dell EMC Cloud Snapshot Manager, a SaaS offering for AWS and Azure, breaks cloud silos as a multi-cloud solution, making it easier for customers to organize and protect public cloud workloads. Customers can quickly discover, orchestrate and automate the protection of workloads across multiple clouds and regions based on policies for seamless backup and disaster recovery, using one SaaS tool that does not require installation or infrastructure. Take advantage of our 30-day free Cloud Snapshot Manager trial.

Data Protection for Service Providers


VMware and Dell EMC Data Protection have greatly enhanced their integration, making it easier for service providers to jointly deliver VMware and Backup as a Service. Now cloud service providers with multi-tenant VMware environments can offer their customers robust, integrated data protection with a best-in-class user experience through vCloud Director or Backup as a Service. Service providers and their customers can benefit from the proven low operating cost, high scalability and performance of Dell EMC Data Protection, and purchase directly through VMware Cloud Provider Program (VCPP), paying in arrears.

It has never been easier for you to extend your customer’s infrastructure to the cloud with simplicity, automation and scale – at a fraction of the cost – making you a trusted partner in backup and recovery.

Integration with VMware


Today, most workloads run on VMware virtual machines (VMs). Protecting these environments can get complicated as the amount of data, applications, and VMs continues to increase. As users adopt cloud technologies, the movement of both data centers and the data protection environment further complicates matters because organizations must deal with siloed data, as well as multiple solutions and vendors.

To accelerate our customers’ IT transformation and enable data protection for VMware and cloud environments, Dell EMC and VMware together offer easy, secure and cost-effective solutions with deep integration points. As more users adopt multi-cloud environments, Dell EMC’s deep integration with VMware’s user interface becomes more and more important, providing the best user experience for VMware users on premises or in the cloud.

Tuesday 9 July 2019

Accelerate HPC workloads with SAGA – Find out how with Dell EMC Isilon and Altair

An ongoing challenge with HPC workloads is that as the number of concurrent jobs increases, storage reaches a critical point where NFS latency spikes, and beyond that critical point, all workloads running on that storage crawl. An integration of Dell EMC Isilon scale-out storage with Altair Accelerator enables Storage-Aware Grid Acceleration (SAGA), an elegant and innovative solution that can address your next wave of design challenges.

As the number of concurrent jobs in HPC workloads increases, storage latency spikes and workloads start to crawl.

Let us consider a scenario in which you have 10,000 cores in your compute grid and each of your jobs runs 30 minutes, so if you submit 10,000 jobs to the job scheduler, the set should finish in 30 minutes with no jobs waiting in queue. Over time, your test cases grow to 20,000 jobs, and with 10,000 cores that set finishes in 60 minutes. The business need is that you want those 20,000 jobs to finish in 30 minutes, so you add 10,000 more cores. But now, the set doesn’t finish even in 2+ hours because storage latency has spiked from 3ms to 10ms. Latency has a quadratic (x²) impact on run time, so doubling latency quadruples your average run time.
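Here is that scenario worked through in a small Python sketch, assuming the quadratic latency-to-run-time relationship described above (a simplification; real behavior depends on each workload’s I/O mix):

```python
def run_time_minutes(baseline_minutes: float,
                     baseline_latency_ms: float,
                     latency_ms: float) -> float:
    """Scale a job's run time quadratically with NFS latency."""
    return baseline_minutes * (latency_ms / baseline_latency_ms) ** 2

# 30-minute jobs at a healthy 3 ms NFS latency:
print(run_time_minutes(30, 3, 3))   # 30.0  -> the original schedule holds
print(run_time_minutes(30, 3, 6))   # 120.0 -> doubling latency quadruples run time
print(run_time_minutes(30, 3, 10))  # ~333  -> the 10 ms spike turns 30-minute
                                    #          jobs into 5.5-hour jobs
```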

Let’s now look at another scenario with more I/O-intensive jobs, so just 5,000 concurrent jobs push the NFS latency to that critical point. By adding only 50 more jobs, you would spike the latency to 2x the normal value. And this latency spike doesn’t just affect the additional 50 jobs but the entire 5,050 jobs on the compute grid. Beyond that critical point, there is no value in running I/O-intensive jobs on the grid.

In a scale-out Dell EMC Isilon Network Attached Storage architecture, you can add more storage nodes and push the critical point to the right so that you can run more concurrent jobs on the compute grid. Remember that workloads are unpredictable, and their I/O profiles can change without much notice.

Storage latency greatly impacts runtime of a job, which in turn impacts time to market.

One of the key pieces of the Electronic Design Automation (EDA) infrastructure — or any HPC infrastructure — is a job scheduler that dispatches various workloads to the compute grid. Historically, the workload requirements that are passed on to the job scheduler have been cores, memory, tools, licenses and CPU affinity. What if we add storage as a workload requirement — NFS latency, IOPS and disk usage? Now the job scheduler managing the compute grid is aware of the underlying storage system and can manage job scheduling based on each job’s storage needs, thus accelerating grid throughput by distributing jobs appropriately. Storage is now a resource just like cores, memory, and tools consumed by the workload based on its priority, fair share and limits.
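As a conceptual illustration of storage as a schedulable resource, here is a minimal Python sketch of a dispatch loop that defers I/O-intensive jobs when measured NFS latency nears the critical point. All names and thresholds are hypothetical; Altair Accelerator’s actual Isilon integration works through its own resource and API mechanisms.

```python
from collections import deque

LATENCY_CRITICAL_MS = 3.0  # beyond this point, the whole grid starts to crawl

class Job:
    def __init__(self, name: str, io_intensive: bool):
        self.name = name
        self.io_intensive = io_intensive

def current_nfs_latency_ms() -> float:
    """Stand-in for querying the filer's latency metric via its API."""
    return 2.5  # placeholder value

def dispatch(queue: deque) -> None:
    deferred = deque()
    while queue:
        job = queue.popleft()
        if job.io_intensive and current_nfs_latency_ms() >= LATENCY_CRITICAL_MS:
            deferred.append(job)  # throttle: hold I/O-heavy work back
        else:
            print(f"dispatching {job.name}")
    queue.extend(deferred)        # retry these once latency subsides

dispatch(deque([Job("sim-001", io_intensive=False), Job("extract-002", io_intensive=True)]))
```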

Unmanaged I/O-Intensive jobs cause a dramatic increase in latency.

This simple idea has huge implications on job throughput in the EDA world. As you already know, job throughput impacts design quality and reliability, which in turn impacts tape-outs and ultimately time to market. EDA workloads are massively parallel, and as you increase the number of parallel jobs, you put more pressure on the underlying storage system, as expected – but this impact on storage is much more drastic on legacy scale-up storage architectures compared to Isilon, a scale-out storage system. Read more about the benefits of an Isilon scale-out NAS architecture in this white paper.

Storage-Aware Grid Acceleration with Isilon and Altair Accelerator™


With SAGA, you’re throttling and/or distributing jobs that are I/O-intensive as latency spikes beyond a configured value, and now you’re not running 20,000 concurrent jobs but enough so that your jobs finish in 30–45 minutes instead of 4 hours. In addition to 100% throughput gains, you also have substantial indirect cost savings because you’re using 50% fewer licenses and cores. In this example, the numbers are skewed to simplify calculations, but the impact and benefits are similar in the real world.

In the example below, an unmanaged workload of 500 I/O-intensive jobs ran in around 3 minutes on 500 CPUs. When Altair Accelerator was implemented to manage the workload, it ran in the same 3 minutes on only 10 processors — using around 50x fewer resources.


SAGA lets you run your workload with up to 80x fewer compute resources.

Hot directory detection


Altair Accelerator and Isilon also work together to ensure that filer temperature doesn’t get too hot and compromise performance. Isilon provides feedback to Accelerator, and if an I/O-intensive job needs to be preempted, Accelerator will suspend it.


SAGA identifies I/O-intensive jobs and responds by preempting only those jobs in the hot directory.


SAGA distributes jobs based on I/O resources and preempts I/O-intensive jobs to maximize job throughput.

Storage is a critical resource


Like cores and memory, storage must be a resource in your grid system, and a true scale-out storage system like Isilon, with its extensive API stack, is very valuable here. Its integration with Altair’s enterprise-grade Accelerator job scheduler is key to solving the next set of design challenges.

Saturday 6 July 2019

Dell EMC vs AWS—USD Reveals True Economics

University of San Diego (USD) is one of the top Catholic universities in the nation, recognized for our academic excellence and development of leaders who become change-makers in a variety of fields.

We’re known for the exceptional personal classroom experience we offer students, with world-class professors teaching undergraduate class sizes that are limited to 30 students. But we also leverage technology to deepen and extend our rich academic and extracurricular offerings—such as delivering more than 1,100 online and hybrid courses and “flipping the classroom” to allow professors to share lectures electronically and use class time to engage students in discussions and answer questions.


Ensuring that our students and faculty have all the solutions they need to excel isn’t easy, and we’ve invested in many different advanced technologies to accomplish this. We thought it would be relatively simple to bring these together and support our own virtual IT infrastructure. Unfortunately, trying to do it ourselves proved more costly and time-consuming than anticipated.

In 2011, we turned to Dell EMC Vblock® Systems as the core of our compute and storage, running hundreds of educational and business applications. And now, four years later, we’re continuing to modernize our IT infrastructure with a new VxBlock™ System 350 to support our four campus data centers. Plus, we rely on a hyper-converged Dell EMC VxRail™ Appliance at our disaster avoidance site in Phoenix, Arizona.

To determine our infrastructure refresh strategy, we performed extensive analyses to compare the cost and efficiency of using Amazon Web Services (AWS) versus our Dell EMC-supported private cloud infrastructure. And what we discovered made choosing to update our Dell EMC footprint a no-brainer.

Dell EMC was significantly less expensive than AWS. Here are some of the details:

Lower cost

◈ We spent about $900,000 on our first Vblock System over four years

◈ Dividing by 48 months, that works out to approximately $19,000/month, or roughly $26/hour of continuous operation

◈ This is the equivalent of about 100 AWS EC2 instances with up to 100GB of attached storage, and that doesn’t include the cost of monthly support from AWS.

Higher capacity

◈ Based on these figures, we calculated that we could run 4x the number of solutions in our VxBlock System-supported private cloud for the same cost as the AWS public cloud

Faster access

◈ Plus, by utilizing an on-site Dell EMC solution, we can connect the VxBlock System to our other three campus data centers at 10 Gbps and provide on-campus access to users at a head-turning 1 Gbps—compared to AWS’s Internet-speed connections

It’s true that I could purchase AWS EC2 Reserved Instances for one or more years and reduce my public cloud costs a bit, but AWS still wouldn’t be cheaper than Dell EMC! The quick sketch below reproduces the core math.
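For readers who want to reproduce the back-of-the-envelope calculation, here is a short sketch, assuming 24/7 operation and an average of 730 hours per month:

# Back-of-the-envelope reproduction of the USD cost figures.
vblock_total_usd = 900_000      # four-year spend on the first Vblock System
months = 48
hours_per_month = 730           # average hours in a month, running 24/7

monthly = vblock_total_usd / months     # ~18,750 -> "about $19,000/month"
hourly = monthly / hours_per_month      # ~25.7   -> "roughly $26/hour"
print(f"${monthly:,.0f}/month, ${hourly:,.0f}/hour")

Comparing that hourly figure against roughly 100 equivalent EC2 instances with attached storage and support is what drives the 4x capacity conclusion above.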

At USD, users rely on the IT infrastructure to be continuously available so they can use the latest technologies without thinking about it. Our goal is to achieve a mindset of, “We’re in the higher education business—we just happen to use IT to deliver services.”


Since we implemented our first Vblock System, our IT Services team has been able to focus on innovation and adding value, rather than on simply keeping the lights on. A key component of this has been the converged and hyper-converged support, which enables us to get assistance anytime we need it with a single phone call—without the finger-pointing one experiences when dealing with multiple vendors.

I was just talking with some colleagues about our plans to roll out our new VxBlock System, and it’s nice to be able to say I’m investing in my second Dell EMC solution because I’ve determined it’s one-quarter of the cost of AWS.

Thursday 4 July 2019

Growing the Dell EMC partnership with NVIDIA

Dell EMC announces resale of the NVIDIA DGX-1 deep learning server, now available to organizations around the world.


For years, Dell EMC has worked to bring the power of GPU-accelerated computing to our high-performance computing customers. This is a quest that continues today, with more momentum than ever before.

During ISC 2019, Dell EMC announced a portfolio of AI systems that now includes the NVIDIA DGX-1, offering high performance for the most challenging AI workloads. You can now find NVIDIA GPUs inside several Dell EMC servers and Ready Solutions, including the new Dell EMC DSS 8440 server with up to 10 GPU accelerators, and the Ready Solutions for AI, Deep Learning with NVIDIA offering in our growing portfolio of Dell EMC Ready Solutions.

Today, we’re taking things a step further with an expanded partnership with NVIDIA. Customers have asked for NVIDIA DGX-1 to complement the existing Dell EMC PowerEdge line and it’s now available worldwide from Dell EMC. Through this partnership, Dell EMC and NVIDIA will make the power of GPU-accelerated computing conveniently available to organizations just about everywhere.

For those homing in on NVIDIA GPU-accelerated deep learning systems to stay competitive, this announcement should be cause for excitement. The NVIDIA DGX-1 is a single server with the all-NVIDIA software stack, designed for deep learning performance. It’s architected for high throughput and high interconnect bandwidth to maximize neural network training performance.

Dell EMC can help you get the most out of the NVIDIA DGX-1 by pairing the system with PowerEdge servers, PowerSwitch networking and/or storage, including the Dell EMC Isilon F800 scale-out all-flash NAS. With the ability to support massive concurrency, scale from tens of terabytes to tens of petabytes of data and linearly scale bandwidth up to 945 GB/s, the Isilon F800 is a perfect data complement to the high-performance, high-bandwidth NVIDIA DGX-1 system. It’s built for the challenges of AI applications like deep learning, which are among the most demanding compute- and data-intensive workloads found in today’s data centers.

These new AI options allow Dell EMC customers to quickly deploy production-grade deep learning solutions for demanding use cases, from genomics and precision medicine to autonomous driving and advanced driver assistance systems. Systems like these are now essential for organizations that need to put GPU-based AI systems into production today, not years from now.

And there are a great many organizations that fall into that category. The number of enterprises implementing AI grew by 270 percent in the past four years and tripled in the past year, according to the Gartner, Inc., 2019 CIO Survey. The firm found that 37 percent of organizations have already implemented AI in some form.

Clearly, AI continues to gain momentum. To keep it going, and to open the AI doors to more enterprises, we need turnkey AI platforms that make it faster and easier to adopt applications for deep learning and other technologies for AI. At Dell EMC, we are working with NVIDIA and other world-class technology companies to bring those platforms to market, all around the world.

Tuesday 2 July 2019

The New TOP500 List Debuts: The Envelope, Please…


The June 2019 TOP500 update released at the International Supercomputing Conference in Frankfurt includes multiple Dell EMC supercomputing clusters.


High-performance computing is always a bit of a global competition, with nations and research-oriented institutions pitted against one another in a race to see who has the fastest and most amazing supercomputers — and the respect that comes with operating a leading-edge system. That makes sense, because a blazingly fast supercomputer is an indicator of support for world-class research and a commitment to scientific discovery.

While this is a friendly competition, it is a competition nonetheless, and one that is fun to celebrate with each update to the TOP500 list, which ranks supercomputers by compute performance; the IO500 list, which ranks HPC storage performance; and the Green500 list, which ranks supercomputers by energy efficiency.

At Dell EMC, we always get excited to see our customers’ names on these lists. That’s the case once again with the release of the new TOP500 list. Among the Dell EMC systems that made the list are three new supercomputers that debuted in 2018 or 2019. Our hats are off to these organizations, as well as to all of our HPC customers whose systems appear on the TOP500 list.

Texas Advanced Computing Center


The Frontera supercomputer from the Texas Advanced Computing Center (TACC) at the University of Texas was ranked at No. 5 on the new TOP500 list. Frontera leverages Dell EMC PowerEdge C6420 servers and Dell EMC Isilon unstructured data storage solutions in combination with 2nd-generation Intel® Xeon® Scalable Platinum processors, Intel® Optane™ DC Persistent Memory, CoolIT Systems high-density Direct Contact Liquid Cooling and high-performance Mellanox HDR 200Gb/s InfiniBand interconnect. The system has a total of 448,448 cores.

Frontera — the Spanish word for “frontier” — will fuel important advances in all fields of science, from astrophysics to zoology. The system, built with support from the National Science Foundation, will arm researchers from around the country with the HPC resources they need to run demanding workloads like analyses of particle collisions from the Large Hadron Collider, global climate modeling, improved hurricane forecasting and multi-messenger astronomy.

Mississippi State University


The Orion supercomputer at Mississippi State University was ranked at No. 62 on the TOP500 list. Orion is based on Dell EMC PowerEdge C6420 servers, Intel Xeon Gold processors and InfiniBand HDR. With 67,240 cores, the system will provide researchers with the additional HPC capacity they need to run larger, more complex, and more detailed simulations and models. Orion will support advanced research and development activities in a broad range of areas, including environmental modeling, cyber security, and autonomous vehicle design and operation.

Simon Fraser University/Compute Canada


The Cedar-2 supercomputer from Simon Fraser University/Compute Canada came in at No. 256 on the TOP500 list. Cedar-2 is built with Dell EMC PowerEdge C6320/C6420 servers, Intel Xeon Platinum processors and an Intel® Omni-Path Architecture (Intel® OPA) interconnect. The system has 55,296 cores.

And a little closer to home…


And, of course, at Dell EMC we are pleased to see one of our in-house HPC clusters in the mix with these leading-edge systems. Our Zenith supercomputer, from our HPC and AI Innovation Lab, was ranked at No. 383 on the new TOP500 list. Zenith is based on Dell EMC PowerEdge C6420/C6320p servers, Intel Xeon and Intel Xeon Phi processors, and an Intel OPA interconnect.