Monday, 30 September 2019

Introducing 3rd Party Software Support

Let’s face it, supporting your critical data center infrastructure is hard to do. Knowing which vendor to contact for which problem, remembering all the contact details, websites and more can be confusing and flat-out frustrating.

It’s true, most vendors say they will provide “multivendor” support – but the reality is that ultimately you are often the one responsible for engaging with the 3rd party vendor and closing the case.

At Dell EMC, we get this and that’s why we have created a new feature called 3rd Party Software Support, which is included with your ProSupport Plus for Enterprise support agreement.

With 3rd Party Software Support, we will support any eligible software installed on your Dell EMC system, and the best part is, we will support it whether you purchased the software from us or not.  Not only will we diagnose the issue, we will own the issue through resolution. This includes software titles from Microsoft®, Red Hat® and VMware®.

The bottom line is this: whether you purchase your software from Dell EMC or you already have the software and want to use it on our technology, ProSupport Plus for Enterprise means that if you have a support issue, you just call us. We streamline support through our technology experts who own your case from first call to resolution.

At Dell EMC, we are listening and working hard to make your support experience even more proactive, preventive, personalized and most of all… simple.

Saturday, 28 September 2019

Should You Buy a Name-Brand or a Commodity Server? Research Reveals the Answer

Do you think a server is a commodity? Frost & Sullivan found that most savvy businesses don’t. In fact, organizations with strategic business objectives place greater value on server characteristics that directly impact those objectives. Buying a server is not like buying corn.

Commodity brands are characterized by high-volume, low-price strategies. Name brands are characterized by a broad portfolio of products and services, and offer products vetted with testing and validation. Buyers of name-brand servers tend to be laser-focused on business-related outcomes. They place a greater priority on factors like security and reliable availability. Buyers of commodity-brand servers are less concerned with strategic business benefits and focused purely on maximizing how far their dollars go. For example:

◒ Buyers of name brand servers value protection against hardware and firmware compromise more than twice as much as those who purchase commodity brands.


◒ Name brand buyers don’t want to have to reboot their servers, but when they do have to reboot, they want it to happen quickly (52%). Commodity buyers care less about this (39%).


If you prize key business objectives, you will most likely gravitate toward the high performance and high functionality of a global name brand. And there are even differences among global brands. Dell EMC outranked commodity brands on almost all criteria. How did we measure this? In the Frost & Sullivan survey, participants were asked to indicate their preferred brand and rank how it stacked up against the competition. Categories ranged from reliable availability to information security assurance to scalable designs.

Global name-brand players are where it’s at when it comes to crucial features like security. Dell EMC servers offer protection from threats ranging from malware injections to data breaches. Commodity brands simply don’t.

Just as you might look for cutting-edge features when buying a new car, you might be looking for cutting-edge features in a server. Those features probably don’t exist in a commodity server. And if they do, they probably haven’t been tested.

Read the complete Frost & Sullivan report here.

Friday, 27 September 2019

Dell EMC Continues SC Series Investment with 2nd Major Update in 12 Months

When it comes to storage innovation, our policy is to never rest on past accomplishments, even when things are going spectacularly well for our customers.


That’s why I’m so pleased to announce the launch of SCOS 7.4, the newest firmware update for SC Series arrays – and a worthy successor to last year’s acclaimed 7.3 release. 7.4 makes an already full-featured platform even more robust with new usability, ecosystem and workload performance advantages.

Welcome to the next stage of SC innovation


SCOS 7.4 and its accompanying management update, DSM 19.1, are available today as a no-charge, non-disruptive upgrade for customers with current support contracts. New capabilities include:

◈ Easier resource prioritization across SC Series federations – Replication QoS policy “cloning” minimizes error and saves hours of configuration in large deployments, while giving you full control over bandwidth utilization between arrays during replication or when using our popular Live Migrate or Live Volume features.

◈ Improved security administration – Enhanced tools for managing SSL certificates and LDAP groups save even more time and cost, enabling larger secure admin environments, and reducing the likelihood and impact of data breaches.

◈ Key OS and app integrations – New software- and system-level support leverages Dell EMC’s deep partnerships with Microsoft, VMware and others to ensure SC solutions complement and enrich the ecosystems customers depend on.

◈ Enhanced “out of box” experience – Now faster than ever, web-based setup lets you execute an entire SC array installation in minutes from your mobile device, expanding on management capabilities enabled previously by both Unisphere for SC and CloudIQ.

Building on last year’s improvements


SCOS 7.4 also provides a significant performance boost, above and beyond the large gains posted with 7.3. Application-specific test results show up to a 43% increase in SQL transactions per second and a 31% increase in low-latency VMs supported for VDI.

Every array in the SC lineup gets more speed with SCOS 7.4, but SC7020 gets an extra helping, thanks to more efficient utilization of that model’s dual processor architecture. On hybrid or all-flash SC7020 arrays, the new firmware provides a 40% increase in mixed workload IOPS, confirming SC7020’s powerhouse status near the upper end of the SC portfolio. Our large install base of legacy SC8000 customers now has multiple high-performance, high-capacity upgrade paths, with SC9000, SC7020 and SC7020F all providing excellent options for a tech refresh.

Smart choice for a changing technology landscape


We know you have a lot on your mind. With application development, software upgrades and a dozen other things competing for your limited time and IT budget, every dollar you spend must pay off today and tomorrow, without requiring you to step back with frequent re-plans.

That’s why we pack so much headroom into our SC solutions. Whether it’s extra performance to accommodate unforeseen workloads, intelligent federation that rebalances your environment as it evolves, or a host of other forward-thinking capabilities, we’re building in flexibility to take you wherever you need to go – at a cost that won’t erode long-term ROI. Future-proof design is in our DNA. It’s what keeps customers coming back for solutions that outlast multiple product cycles elsewhere in the datacenter.

Other storage providers may wait to hear, “what have you done for me lately?” — but at Dell EMC, we prefer to answer the question before it’s asked. SCOS 7.4 is just the latest proof of our tireless commitment to enable ongoing workload and business success for our SC customers.

Thursday, 26 September 2019

Introducing Dell EMC PowerProtect DD Series Appliances, the Next Generation of Data Domain, Setting a New Bar for Data Protection in a Modern Digital Economy

It’s not often that a product has the longevity to successfully span decades in any market. Then again, Data Domain was never just any product. Data Domain spearheaded one of the most disruptive technology shifts in IT by leading backup from tape to disk-based systems. And along the way, it helped create the Purpose Built Backup Appliance market. Since IDC has been tracking this category, Dell EMC has been the revenue leader. Over this time, Data Domain has consistently set the bar for innovation and customer value, and today it is recognized as the industry’s most scalable, reliable, cloud-enabled backup appliance. Today we begin the next chapter with the introduction of the next generation of Data Domain: PowerProtect DD Series Appliances.

The announcement of PowerProtect DD Series Appliances coupled with several other new data protection enhancements positions the Dell EMC data protection portfolio for continued leadership. Our customers can be confident that their data will be protected from edge to data center to cloud, now and into the future.

PowerProtect DD – The Ultimate Protection Storage Appliance


Continuing a legacy of never-ending innovation, PowerProtect DD Series Appliances deliver next generation capabilities to help organizations transform data into value.

◈ Fast, Efficient, Secure
◈ Industry Leading Multi-Cloud Data Protection
◈ Expands Multi-Dimensional Appliance Portfolio


The PowerProtect DD Series Appliances comprise three new physical appliances (the PowerProtect DD9900, DD9400 and DD6900) as well as the existing DD3300. The series also includes a software-defined appliance, PowerProtect DD Virtual Edition.


Fast


Performance is paramount in a world of relentless data growth, not only to protect data quickly, but also to accelerate recovery in order to ensure availability. PowerProtect DD will deliver up to 38% faster backups and 36% faster restores. Time is everything when recovery is on the line and PowerProtect DD continues to slash the time required to get back up and running with faster instant access and restore. Up to 60K IOPS supporting as many as 64 concurrent virtual machines represents a 50% increase over what was previously available. These capabilities not only improve recovery times but also help organizations drive data reuse, whether it be for analytics, development, test and more. To deliver faster networking compatibility the new appliances also support 25GbE and 100GbE.

Efficient


Doing more with less has become the new normal. To improve TCO, PowerProtect DD delivers new levels of storage efficiency. PowerProtect DD9900 can support 25% more capacity with 1.25PB usable storage. Efficiency improvements drive up to 62.5% more effective capacity in a single rack, a remarkable 81.3PB.

Ranging from 48 TB to 1.25 PB, the new PowerProtect DD models deliver up to 30% more logical capacity and 65:1 data reduction. Rack space is reduced by as much as 39% and customers will have the option of grow-in-place expansion.
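
As a rough sanity check on those numbers, here is a back-of-the-envelope sketch that simply assumes the quoted usable capacity and data-reduction ratio multiply directly:

```python
# Back-of-the-envelope check of the quoted PowerProtect DD figures.
# Assumes effective (logical) capacity = usable capacity x data-reduction ratio.
usable_capacity_pb = 1.25   # DD9900 usable storage, in PB
data_reduction = 65         # quoted 65:1 data reduction

effective_capacity_pb = usable_capacity_pb * data_reduction
print(f"Effective capacity per rack: {effective_capacity_pb} PB")  # 81.25 PB, quoted as ~81.3PB
```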

Secure


To meet growing security concerns, PowerProtect DD appliances support the latest update to our Cyber Recovery Solution by delivering a unique Cyber Recovery Vault. Additionally, Dell EMC PowerProtect Cyber Recovery now supports PowerProtect Software deployments, provides integration with third-party options, and adds UI improvements to simplify management.

Industry leading multi-cloud protection


PowerProtect DD carries forward support for all previous cloud capabilities including native tiering to the cloud, a broad ecosystem of backup applications for both public and private clouds, and cost-effective disaster recovery from AWS, Azure and VMware Cloud on AWS. Support continues for PowerProtect DD Virtual Edition in the public cloud to provide backup and replication in hybrid cloud environments.

Expanded multi-dimensional appliance portfolio


PowerProtect DD further strengthens the foundation of the Dell EMC multi-dimensional appliance portfolio, providing customer choice for expansion depending upon workloads. Scale-up with PowerProtect DD Series Appliances and Integrated Data Protection Appliances or scale-out with the PowerProtect X Series Appliances. PowerProtect DD is the preferred target appliance for PowerProtect Software and will continue to support the Data Protection Suite of products and solutions.


Finally, we are pleased to introduce the first enhancements to PowerProtect Software under our new quarterly release cadence. The new release will support Cloud Disaster Recovery with PowerProtect DD and Data Domain, add new self-service restore capabilities for VMware admins, and, through integration with Storage Direct, allow PowerProtect Software to leverage PowerMax snapshots to deliver high-performance backup and restore that minimizes production impact.

Data management at global scale


As we look to the future of data protection and management, and the evolving requirements of our customers, we continue to extend our vision of scale. The next generation of scale will require capabilities beyond today’s expectations; we call this data management at global scale.

Data management at global scale will augment our software defined data management platform and multi-dimensional appliances and enable our customers to not only manage, protect and recover data efficiently, but with high performance, security, simplicity, and flexibility at exabyte scale. Delivered in phases, data management at global scale will provide unique capabilities for not only our appliance portfolio, but for multi-site and hybrid multi-cloud environments. Stay tuned for more.

Wednesday, 25 September 2019

Dell Technologies Brings the Power to Oracle OpenWorld

Every September, tens of thousands of IT professionals converge upon San Francisco for one of the largest technology events of the year, Oracle OpenWorld.


Dell Technologies was at the event in full force this year as a partner with Dell EMC PowerMax storage. Here’s a look at what was most important to us at the show.

PowerMax and Oracle, the perfect pair


PowerMax, the world’s fastest data storage array, got faster last week with the addition of end-to-end NVMe, storage class memory (SCM) for persistent storage, and real-time machine learning to optimize Oracle workloads. PowerMax SCM powered by dual port Intel® Optane™ SSDs delivers up to 50% better response times compared with SAS flash (NAND), and is offered in 750GB and 1.5TB drive capacities.

PowerMax features high-speed smarts to power the most critical Oracle workloads. It also directly aligns to Oracle customer requirements for higher levels of performance, hyper consolidation, simplified storage, and real multi-cloud to truly modernize the data center.

This announcement builds on Dell EMC’s already impressive momentum in the high-end market. Dell EMC is the undisputed leader of the high-end storage market according to IDC, with a 43.9% share; nearly triple that of the next highest competitor.[1] PowerMax was also the winner of CRN’s 2018 Product of the Year and Tech Innovator awards.

The latest PowerMax announcements help customers better manage their high-value applications with particular focus on:

◈ Massive consolidation of multiple concurrent and mixed workloads, especially deployments that require great read response times during heavy write activity

◈ Latency sensitive applications that require the lowest response times

◈ Mission-critical real-time analytics apps like fraud detection, real-time marketing analytics

◈ High demand OLTP systems: trading systems, large scale billing apps

◈ Large ERP systems and demanding Healthcare deployments (EPIC Cache databases)

In addition, PowerMax automation support for Kubernetes, VMware and Ansible streamlines provisioning, replication and other storage management tasks. Whether building out an on-premises cloud or private data center, the vRO plug-in for PowerMax enables the automation and orchestration of storage provisioning and management tasks for PowerMax storage systems. Out-of-box workflow libraries include both low-level and VMware-integrated workflows to enable customers to use the workflows as is or customize them to meet their automation needs.
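
To make that concrete, here is a minimal Python sketch of the kind of provisioning call such automation ultimately wraps; the Unisphere host, array ID, credentials, endpoint version and payload fields below are illustrative assumptions rather than a documented recipe:

```python
# Hypothetical sketch: create a storage group on a PowerMax array through the
# Unisphere REST API using plain HTTP calls. The host, array ID, credentials,
# endpoint version and payload fields are illustrative assumptions; consult
# the Unisphere REST API documentation for the exact contract.
import requests

UNISPHERE = "https://unisphere.example.com:8443"  # placeholder Unisphere host
ARRAY_ID = "000197900123"                         # placeholder array serial
AUTH = ("admin", "password")                      # placeholder credentials

payload = {
    "storageGroupId": "oracle_prod_sg",           # name of the new storage group
    "srpId": "SRP_1",                             # storage resource pool to draw from
    "sloBasedStorageGroupParam": [{
        "sloId": "Diamond",                       # service level for the volumes
        "volumeAttributes": [{
            "num_of_vols": 4,
            "volume_size": "100",
            "capacityUnit": "GB",
        }],
    }],
}

resp = requests.post(
    f"{UNISPHERE}/univmax/restapi/90/sloprovisioning/symmetrix/{ARRAY_ID}/storagegroup",
    json=payload,
    auth=AUTH,
    verify=False,  # illustration only; validate certificates in real use
)
resp.raise_for_status()
print("Storage group created:", resp.json())
```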

But what good is performance if you can’t get to the applications? Like many companies, Dell Technologies runs many of its own applications on Oracle. So integrated features like true “active-active” Oracle stretch clusters for real disaster tolerance, non-disruptive migrations, deduplication, compression, and data “snaps” for team application development and testing keep business momentum moving quickly as well.

Dell Technologies and Oracle Cloud


As we saw this week at OOW, VMware is looking to bring new cloud support to Oracle with VMware Cloud Foundation on Oracle Cloud. As part of this announcement, an Oracle Cloud VMware solution will be supported by Oracle and its partners. The solution will be based on VMware Cloud Foundation and will deliver a full-stack software-defined data center (SDDC) including VMware vSphere, NSX, and vSAN.

Dell Technologies Cloud built on VMware Cloud Foundation provides Oracle customers with a true hybrid cloud experience in a multi-cloud world, delivering consistent management, operations, flexibility, ease of use, and full life cycle management of hardware and software components.

Moving Forward


Organizations of all sizes are striving to achieve a cloud operating model in their IT transformation journey. Multi-cloud environments are a reality. In fact, IDC estimates more than 70% of companies are using multiple cloud environments. Dell Technologies understands that real business moves ahead with an agile infrastructure strategy without leaving existing workloads behind. Now the real work begins. Look to Dell Technologies to explore and bring the latest tested and proven products, solutions, and strategies that enable real transformation from the edge to the data center core to the cloud.

Tuesday, 24 September 2019

Survival of the Fittest: Full-Stack Management

Ecosystems are complex and fragile. Just look at how climate change is affecting the world. From shifting weather patterns that threaten food production, to rising sea levels that increase the risk of catastrophic flooding, the impacts of climate change are global in scope and unprecedented in scale. Ecosystems are in a constant state of change. The organisms inside must quickly adapt, or risk extinction. Take polar bears, for example: they are forced to find new ways to forage for food because the ice they hunt on is melting rapidly. Animals gravitate to what will give them the best chance of survival in their ecosystem.

This constant flux reminds me of our very own ever-evolving IT industry. As an IT service administrator, you contend with multiple disconnected tools and information silos. This throws your entire ecosystem off balance and threatens your survival. Luckily, Dell EMC has the OpenManage Ecosystem portfolio of integrations and APIs to help you succeed. The integrations and APIs in this ecosystem seamlessly break down information silos and give you full-stack management of your virtual and cloud infrastructures.

Dell EMC partners with industry leading vendors to provide integrations that help you streamline your IT administration, enabling comprehensive oversight and deeper control of your own ecosystem.


Streamline IT administration


Did you know that nearly 70% of your time is spent maintaining existing IT environments? You can better respond to challenges in your environment through automation. The OpenManage ecosystem portfolio of integrations and APIs helps you streamline your IT administration with native full-stack control from a single interface. The OpenManage integrations automate IT administration in one user-friendly console, so you no longer toggle between screens or input data manually. This also reduces your risk of human error, further streamlining your administration.

The OpenManage Integration with VMware vCenter is a prime example of how the OpenManage ecosystem portfolio of integrations and APIs streamlines IT administration. It enables bare-metal server provisioning from within vCenter. Instead of using complicated server provisioning tools and processes, the OpenManage Integration employs hardware profiles. These profiles streamline deployment and configuration.

Another example of streamlined IT administration is the OpenManage Integration with ServiceNow. The integration leverages the OpenManage APIs to automatically import open cases from SupportAssist Enterprise as incidents into ServiceNow. You no longer need to create a separate incident in your ServiceNow instance to track a Dell support request raised against your PowerEdge server. For example, if a fan fails in your PowerEdge server, SupportAssist Enterprise will create a Dell support ticket based on the alert that it receives from OpenManage Enterprise. It will also dispatch a replacement fan the next business day. Additionally, the integration allows you to manage your events and incidents automatically, so you no longer spend countless hours manually entering information into a separate console.

In the same vein of efficiency, you can provision a dynamic infrastructure in a matter of seconds rather than days by simply running software commands on OpenManage Ansible Modules. These modules enable you to use Red Hat Ansible to automate and orchestrate the provisioning, configuration, deployment, and update of your PowerEdge servers. You can unite workflows into a single pipeline, increasing your administration efficiency.

Comprehensive oversight


Did you know 75% of downtime is caused by manual and disconnected IT processes? You can better adapt to challenges if you have better insight into your IT administration. The OpenManage ecosystem portfolio of integrations and APIs enables comprehensive oversight by facilitating the integration of management information into one easy console.

Take the OpenManage Integrations for Microsoft System Center for example. These integrations provide visibility and control of hardware infrastructure, operating systems, and virtual machines. Furthermore, the OpenManage Integration with Microsoft Windows Admin Center simplifies discovery of PowerEdge servers and Dell EMC Solutions for Microsoft Azure Stack HCI. This integration provides centralized access to each of the servers and clusters in your environment. It can be used exclusively for server lifecycle management, health status, monitoring, and troubleshooting.

“The Dell EMC OpenManage Integration with Microsoft Windows Admin Center gives us full visibility to Dell EMC Solutions for Microsoft Azure Stack HCI, enabling us to more easily respond to situations before they become critical. With the new OpenManage integration, we can also manage Microsoft Azure Stack HCI from anywhere, even simultaneously managing our clusters located in different cities.”
                                                                                               – Greg Altman, Swiff-Train Company

Deeper control


Did you know 65% of IT decision makers are deploying multi-cloud solutions? With so much of your IT in multi-cloud, it’s important to choose a solution that puts you in control. The OpenManage ecosystem portfolio includes user-friendly open RESTful APIs that give you control of your environment. You can stack the APIs easily in one script for aggregated operation in a multi-device and multi-vendor environment. They also use common scripting languages and support the DMTF Redfish standard. Furthermore, Redfish composability APIs were recently added to the PowerEdge MX Kinetic infrastructure. This enables full storage composability through VMware Cloud Foundation. You can control your workloads to provision storage as needed. This gives you full lifecycle management of the hardware and software within your cloud infrastructure.
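
As a small illustration of that kind of scripting, the sketch below walks the standard DMTF Redfish system collection on a single server and prints basic inventory and health; the management controller address and credentials are placeholder assumptions:

```python
# Minimal sketch: query server inventory and health over the DMTF Redfish API.
# The management controller address and credentials below are placeholders.
import requests

BMC = "https://192.0.2.10"      # placeholder iDRAC / BMC address
AUTH = ("root", "calvin")       # placeholder credentials

# The Redfish service root exposes a Systems collection for the managed server(s).
systems = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False).json()

for member in systems.get("Members", []):
    system = requests.get(f"{BMC}{member['@odata.id']}", auth=AUTH, verify=False).json()
    print(system.get("Model"),
          system.get("PowerState"),
          system.get("Status", {}).get("Health"))
```

Because Redfish is a standard, the same pattern can be stacked across devices from multiple vendors in one script, which is the aggregation the paragraph above describes.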

Don’t risk extinction. Defeat challenges in your ecosystem. Overcome information silos with the Dell EMC OpenManage Ecosystem portfolio. The ecosystem of integrations and APIs streamlines your IT administration, helping you gain comprehensive oversight and deeper control. Learn which OpenManage integration is right for your environment.

Monday, 23 September 2019

Advancing the (Technological) Boundaries of Creativity at SIGGRAPH 2019


Dell Precision is embedded within the creative industry. We’re honored that our workstations have powered visual effects in award-winning shows and blockbusters and we are committed to supporting industry partnerships and key events.

To showcase our wide range of solutions for the media and entertainment industry, this year we are also bringing specialists from across Dell Technologies to SIGGRAPH 2019. Many of our workstation customers also use Dell UltraSharp displays and Dell EMC server and storage solutions to power render farms and help keep studios’ IP secure. Once creations are ready, Alienware and Dell consumer devices are also the perfect platform to enjoy or test movies, games and much more.

As SIGGRAPH 2019 sponsors, this week you’ll see our solutions and specialists around the show:

◈ Dell industry and technical experts will be available on the Dell booth (#727). Visit to learn more about how Dell can support your workflow

◈ 65+ Alienware systems and Dell monitors will power the Immersive Pavilion & VR Theater

◈ Creator workflow demonstrations and portfolio showcases will be featured on the Dell booth. You can also check out Dell products at the TechViz, NVIDIA, Blender, Boris FX, Blackmagic Design, and Foundry locations

Visitors will also get an opportunity to hear from Dell customers such as DNEG, Animal Logic and Cinesite, who are featured across the show.

Creators wanna create


We work closely with the creative community to understand user workflows and needs. Something that has not changed in the last two decades is that customers simply want to create in real time.

Creators want to bring their ideas to life instantly and expect technology to keep up. Dell’s broad workstation portfolio can be configured to support all levels of users, from animators working on 2D projects to powerhouse creators who need 52 cores of power and VR-ready capabilities at their fingertips. Yet most of our customers don’t want to have to worry about speeds and feeds or software compatibility – they want a smooth experience and a worry-free creative environment.

For this reason, Dell has built strong relationships with professional ISV partners. We carry out thorough tests to ensure that the creative applications our customers use every day are certified and optimized to work on Dell workstations. We’ve also created unique tools like the Dell Precision Optimizer which automatically customizes your system settings for the best application performance. It’s a dance of technology and creativity.

Beyond #SIGGRAPH2019: Helping creators “thrive”


The creative landscape has changed significantly in the 22 years since we launched Dell Precision. Visual effects have drastically improved, and photo-realistic rendering is now commonplace.

Visual effects studios are constantly raising the bar when it comes to graphics. That’s why we were particularly excited about this month’s launch of the Dell Precision 7540 and Dell Precision 7740 mobile workstations featuring up to NVIDIA RTX GPU options.

Dell’s most powerful mobile workstations went on sale on Dell.com on July 9 and feature the latest Intel® Xeon® E or 9th Gen Intel® Core™ processors. These NVIDIA RTX Studio mobile workstations use the NVIDIA Turing architecture to provide GPU acceleration for real-time ray tracing, 8K RED RAW playback, and artificial intelligence and machine learning functions. These features, captured in stunning, sleek 15” and 17” designs, mean that customers can create on the go like never before.

We’re happy that our partners also recognize the importance of the creative community. And we’ll continue to drive technology innovations to support the industry.

We urge our customers to continue to push the boundaries of creativity. And if you’re in LA this week we look forward to seeing you at the Dell booth!

Saturday, 21 September 2019

Dell EMC and AMD Partnered to Enable Business Outcomes at the Speed of Innovation

In a modern data center, workload placement should be driven by business strategy – and it is no surprise that 93% of customers are deploying their workloads across 2 or more clouds. Where hardware investments are being made, we find that businesses are purchasing new equipment for emerging workloads 67% of the time. This dynamic landscape requires innovations to place the right workload in the right cloud deployment.


Dell EMC and AMD partnered to address the requirements of a multi-cloud world. Dell EMC realized that the 2nd Generation AMD EPYC™ processors offered outstanding performance, faster memory and I/O bandwidth, and the security features to handle complex workloads.

So we went to work. We leaned on our server market leadership and engineering excellence to reimagine a new kind of server. A server that would take full advantage of new technologies and deliver what we know how to do best: servers that offer optimal performance, effortless data center management and integrated security.


Dell EMC is proud to bring you the new Dell EMC PowerEdge servers with 2nd Generation AMD EPYC™ processors, purposely designed to address the most complex requirements, regardless of workload.


Based on internal testing, we are seeing some phenomenal results. Take for example the PowerEdge C6525. This server delivered weather modeling results in half the time of previous-generation AMD EPYC servers, enabling faster severe storm notification. Performance we can rely on, especially during hurricane season. In addition, the Dell EMC Ready Solutions for HPC now include the PowerEdge C6525. The Dell EMC Ready Solutions for HPC simplify and shorten the time it takes to design and configure HPC systems built to execute compute-intensive tasks in real time.


To give you a sense of what to expect from these new servers, here is a statement from Purdue University regarding their experience:

“Artificial intelligence and machine learning applications are key ingredients in the Purdue Integrative Data Science Initiative’s goals of enabling future giant leaps and educating students in data science. These applications also are tremendously compute-intensive. The compute capacity of the new Dell EMC PowerEdge servers with 2nd Generation AMD EPYC processors can handle these workloads. They offer us additional, cost effective options for delivering the necessary capabilities to our community cluster program’s faculty partners.”

These Dell EMC PowerEdge servers are built with the same care and diligence that has made PowerEdge the bedrock of the modern data center. These servers leverage the following Dell EMC leading-edge innovations:

◈ Multi-Cloud Environment: We are bringing the capabilities of the AMD EPYC processors to our cloud offerings as well. Dell EMC is including vSAN Ready Node building blocks, the Dell EMC VxRail product line, and VMware Cloud Foundation with the Dell Technologies Cloud certification.

◈ Emerging Workloads: We ensured that our HPC solutions incorporate the new technologies of the 2nd Generation AMD EPYC. In the Dell EMC HPC Innovations lab, we have built a 64-node cluster with PowerEdge C6525 servers. This solution delivers 4,096 2nd Generation AMD EPYC cores. This unique facility provides a sandbox for our customers to test and develop applications, while taking advantage of Dell EMC’s HPC / AI expertise.

◈ Accelerated Performance: These PowerEdge servers are designed to take full advantage of the 2nd Generation AMD EPYC for compute-intensive and bandwidth-hungry applications like AI / ML / DL. We optimized the PCIe risers and storage backplanes to leverage every PCIe lane available in the server. This provides ultimate flexibility, whether you need high-performance PCIe slots for GPUs in a 2U server or 24 direct-attached NVMe drives for the lowest-latency storage applications.

◈ Effortless Management: The Dell EMC OpenManage Ecosystem delivers integrations like VMware Cloud Foundation and vCenter via OpenManage Redfish Composability APIs. When the servers are installed in the cloud, any cloud, customers can easily manage the servers, the OS and the hypervisors from a single screen. Customers who leverage our management solutions have reported significant reductions in server update and deployment times. One customer, for example, reported that OpenManage made it possible to update and manage 300 servers in 30 minutes, a job that used to require 20 days to complete without the OpenManage tools.

◈ Integrated Security: PowerEdge servers are cyber resilient by design. Dell EMC end-to-end security such as the iDRAC based silicon root of trust and signed firmware updates helps protect against malicious activities at both the hardware and the firmware level. The new servers support AMD’s Secure Encrypted Virtualization (SEV) and Secure Memory Encryption (SME) to add additional layers of security at the hypervisor and at the OS levels.

At Dell EMC we are determined to enable business outcomes at the speed of innovation, so that customers can focus on what matters to them: their business priorities. The Dell EMC PowerEdge servers with the 2nd Generation AMD EPYC deliver accelerated performance, effortless management and integrated security. These servers bring innovations to our customers, so they can deploy their traditional, emerging and/or complex workloads in a multi-cloud world. But our work is never finished. We are here to support our customers even before they need us. With ProSupport Plus and SupportAssist, customers can resolve issues with up to 72% less effort.

Thursday, 19 September 2019

Is it Time for NICs to get Smarter?

There has been a lot of talk in the industry in the last couple of years around SMART-NICs. In this blog, we share our perspective on this new class of network accelerators and the role they play in the future of compute platforms. SMART-NICs are expected to play a key role in compute platforms for large data center, Edge and Telco 5G environments.

There is a general trend in the industry around building compute platforms that consist of general purpose CPUs coupled with dedicated accelerators (SOCs and ASICs). The industry also refers to these as Domain Specific Architectures, where a host processor is used to set things up and accelerators do the compute processing for the specific problem domain. A number of factors are driving the adoption of this hybrid computing architecture:

1. The increasing number of cores in CPUs has improved CPU performance, but memory and I/O bandwidth improvements haven’t kept pace. Combining CPUs with accelerators moves some of the processing to accelerator cards without transferring the data back and forth between memory, I/O and the CPU.
2. As network speeds and disk performance increase, some of the associated processing (network services and data processing / analytics) is moving closer to the network in the form of FPGAs to avoid sending unnecessary data to host memory.
3. Telcos are moving towards virtualizing the network edge with the evolution to 5G. Hardware acceleration will play a key role in the 5G architecture for network services offload, 5G network slicing and real-time data processing.

SMART-NICs are a class of accelerators which consist of a standard NIC (Network Interface Controller) combined with FPGA and CPU cores (ARM or x86 cores), as shown in Fig 1. These are expected to play a key role in future system architecture because most of the infrastructure services and applications are or will be network connected. Some examples of infrastructure services are network services (virtual switches, firewalls, load balancers, Telco virtual network functions, SD-WAN), storage services (SDS software for block, file, object storage), analytics, and machine learning. These infrastructure services moved to software defined architectures with SDN (Software Defined Networking), SDS (Software Defined Storage) and distributed data analytics (Hadoop, Spark).

[Fig 1: A SMART-NIC combines a standard NIC ASIC with an FPGA and embedded CPU cores]

As network speeds increase, there is a need to move the associated network processing from the host CPU to network adapters in order to keep up with data rates and reduce the amount of data sent over the I/O bus and into host memory for processing. Hypervisor-resident virtual switches provide a number of functions including data movement, virtual switching overlay, encryption, deep packet inspection, load balancing and firewalling. It is hard to scale these features to future network speeds of 50/100/200G. The FPGA and NIC ASIC on the SMART-NIC (Fig 1) enable this data plane offload to scale to higher throughput, lower latency and higher packets-per-second (pps) performance. Other higher-level network services and Telco virtual network functions (VNFs) are also starting to leverage the FPGA and NIC ASIC to offload data plane processing and new features like network slicing for 5G.
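
A full SMART-NIC exposes far more than a standard adapter, but as a quick way to see which data plane functions a NIC in your own server already offloads today, a small sketch like the following can help; it assumes a Linux host with ethtool installed, and the interface name is a placeholder:

```python
# Rough illustration: list the data-plane offloads a local NIC currently has
# enabled, by parsing "ethtool -k". Assumes a Linux host with ethtool
# installed; the interface name is a placeholder.
import subprocess

IFACE = "eth0"  # placeholder network interface name

result = subprocess.run(["ethtool", "-k", IFACE],
                        capture_output=True, text=True, check=True)

enabled = [line.split(":")[0].strip()
           for line in result.stdout.splitlines()
           if ": on" in line]

print(f"Offload features currently enabled on {IFACE}:")
for feature in enabled:
    print(" -", feature)
```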

Some of the data plane functions also benefit from moving the associated control plane closer to the data plane, leading to embedded CPU cores on SMART-NICs. What are the benefits of doing this?

1. Support of any OS running on the host CPU and enablement of bare-metal containers.
2. Stronger security – embedded signed images can be delivered for software on the SMART-NIC, and this software is independent of any security attack on the OS or applications running on the host CPU.
3. Software running on the SMART-NIC can isolate a server from the rest of the network if a security threat is detected.

The offload of network functions via the SMART-NIC creates an opportunity to further optimize the data flows in the overall compute system, as shown in Fig 2. Software running on the SMART-NIC can enable direct data transfers to server storage and GPUs without using host memory as a staging area, improving performance, reducing latency and freeing up host CPU cycles.

[Fig 2: Optimized data flows, with the SMART-NIC moving data directly to server storage and GPUs without staging it in host memory]

A number of industry-level standardization efforts are needed to develop open APIs so that a SMART-NIC from any vendor can be used to accelerate any workload. These include:

1. Standardization of data plane interface for applications to offload data plane processing.
2. Standardization of interfaces for software life cycle management of SMART-NICs.
3. Standardization of hardware management and monitoring of SMART-NICs via DMTF Redfish interfaces.

These SMART-NICs will play a key role in compute platforms for both Data Center and Edge. In data centers, SMART-NICs enable workloads to scale to higher network speeds. They also free up the host CPU cores and reduce memory consumption and IO bus utilization by moving CPU and data intensive computing to hardware. In Edge deployments, SMART-NICs enable movement towards single socket servers instead of current dual socket platforms, and new features like network slicing for 5G, Telco VNF Acceleration, content distribution, image processing and Machine Learning Inferencing.

Due to the importance of this hybrid architecture for next-generation workloads, CPU vendors, most notably Intel, have also evolved from a processor point of view to a system point of view, with investments in FPGAs, SMART-NICs, GPUs and co-processors. This change will enable the next generation of highly optimized Edge and 5G deployments to be based on x86 compute platforms and SMART-NICs coupled with high-speed persistent memory and storage.

Dell Technologies is leading the innovation in future hybrid system architectures with FPGAs, SMART-NICs, GPUs and other SoC-based multi-core accelerators while working on standardization of APIs and frameworks. Dell Technologies is partnering with telecommunication service providers, bringing leading technology on our journey to 5G.

Tuesday, 17 September 2019

Accelerating the Dell EMC Partnership with the ‘New’ Cloudera

The platform design paradigm from the early days of Hadoop has been to co-locate compute and storage on the same server, which requires expanding both in tandem as your Hadoop cluster grows. This is an acceptable approach for workloads that need to expand compute and storage simultaneously. But, as Hadoop gained mainstream adoption, enterprises started finding workloads where the need for storage outpaced the need for compute. And now, after a decade of big data, enterprises are finding that historical data sets, though accessed less frequently, still need to be easily accessible. This has brought forth new data architecture concepts, as many enterprises look to deploy solutions with independent scaling of compute and storage, plus the option to leverage object storage (in addition to HDFS storage) for Hadoop.

Dell EMC offers a leading-edge file and native HDFS storage product, Dell EMC Isilon, and a distributed object storage product, Dell EMC ECS. Since our partnership with Hortonworks and Cloudera began in 2015, we have been engaged in joint engineering and validation efforts to bring these enterprise shared storage solutions to both Hortonworks Data Platform (HDP) and Cloudera Data Hub (CDH).

These ongoing efforts have proven critical in delivering differentiated shared storage solutions that embody the concept of a consolidated data lake: one that scales data and compute independently, simplifies data management with non-disruptive growth from tens of TBs to tens of PBs in a single namespace, delivers the flexibility to leverage HDFS and/or object storage, and makes it economical to store all of your data in a single place.
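
As a simple illustration of the object storage side of that picture, the sketch below writes and reads a small object against an S3-compatible endpoint such as ECS using boto3; the endpoint URL, bucket name and credentials are placeholder assumptions:

```python
# Minimal sketch: store and retrieve analytics data on an S3-compatible object
# store (such as Dell EMC ECS) with boto3. The endpoint URL, credentials and
# bucket name are placeholder assumptions.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://ecs.example.com:9021",  # placeholder ECS S3 endpoint
    aws_access_key_id="ACCESS_KEY",               # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

bucket = "datalake-cold"                          # placeholder bucket name
key = "events/2019/09/part-0000.json"

s3.put_object(Bucket=bucket, Key=key, Body=b'{"event": "example"}')

obj = s3.get_object(Bucket=bucket, Key=key)
print(obj["Body"].read())
```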

A Renewed Commitment


With the merger of Hortonworks and Cloudera, as well as Cloudera’s new streamlined Quality Assurance Test Suite (QATS) process for certifying both CDH and HDP with hardware vendors, we are excited to announce an accelerated partnership with Cloudera in validating and certifying both CDH and HDP with both Isilon and ECS.

“As the Hadoop landscape and storage requirements evolve, we’re excited to partner with Dell EMC to bring to market solutions backed by its leading edge unstructured data storage offerings like Isilon and ECS,” said Nadeem Asghar, VP of Solutions and Partner Engineering at Cloudera. “Dell EMC shares our commitment to ensuring our customers can always stay ahead of industry and technology trends and we look forward to delivering solutions to our customers for years to come.”

What Does This Mean For You?


This new investment strengthens the Dell EMC and Cloudera partnership allowing us to:

1. Continue to support our existing joint customers on existing and future hardware and software releases.
2. Bring the shared storage model to scale with innovative and fully validated end-to-end platforms that support the growing Hadoop ecosystem.

Today, Dell EMC Isilon has been validated with HDP 3.1 and CDH 5.14. These solutions are supported by Dell EMC and Cloudera and will continue to be supported via a joint support process. This process involves triaging the issue with the Hadoop solution, regardless of where it was discovered, and directing the issue to the appropriate teams.

Over the course of the next few months, we are contracted to work jointly with Cloudera to get Isilon certified through QATS as the primary HDFS store for both CDH (version 6.3.1) and HDP (version 3.1). In the same timeframe, we also plan to get Dell EMC ECS certified through QATS as the S3 object store for both CDH and HDP.

What’s Next?


Beyond this, we plan to launch new joint Hadoop Tiered Storage solutions that enable customers to use Direct Attached Storage (DAS) for hot data and Shared HDFS Storage for warm/cold data within the same logical Hadoop cluster, simultaneously delivering extreme performance and economic scaling. We are also working closely with Cloudera product teams to align the Dell EMC Isilon and ECS product roadmaps with Cloudera’s product strategy for Cloudera Data Platform (CDP), the new Hadoop distribution that combines the best of breed components from both CDH and HDP.

Finally, Isilon’s capability as a data lake that can manage data for several Hadoop distributions simultaneously enables us to offer phased migration services from CDH or HDP to CDP. This simplifies the process and significantly minimizes business risk in migrating to the new Hadoop distribution. At Dell EMC, we plan to launch these migration services as CDP becomes available for on-prem deployment.

Monday, 16 September 2019

Is Data the New Crude Oil?

Let’s drill down into the metaphor.

Moor Insights & Strategy argues that an optimally tuned infrastructure is key to deriving all the rich benefits that go along with effective data management and analytics. They claim that data is the new crude oil and intelligence is the new gasoline, fueling business wins. If we break down this metaphor, it becomes clear that servers play a pivotal role in data management and analytics.

It starts with data. Lots of it. If you’re like most companies, you’re probably drowning in data. But raw data brings little value to your organization. It’s through processing and refining that data into intelligence that value is created. Crude oil must be refined into gasoline to deliver value to the combustion engine. The same is true of data. Raw data must be refined into intelligence to achieve business outcomes and attain actionable insights.


If only this process were as simple as proceeding directly from Point A (raw data) to Point B (intelligence). There is an important middle step involving your IT infrastructure. You’re probably already aware of the power of popular data management and analytics applications such as Microsoft SQL Server and SAP HANA for making sense of the data chaos. There are many others that play a role.

However, what you may not have realized is that these applications are only as good as the hardware they run on. If apps are the industrial workers bringing order to your data, servers are the refinery juggernauts upon which the whole process relies. As Moor Insights puts it, “Without the right infrastructure, businesses will never realize the full benefits of real-time analytics.”

Recent ESG research bears this out: Organizations with modern servers and infrastructure are nearly 7x more likely than organizations with aging servers to report their analytics environments are “very effective” at driving business value. Businesses with modern servers are also 5.3x more likely to report that their research and development function is market leading. In the important process of refining data into actionable intelligence, servers matter.

Moor Insights outlines major infrastructure considerations you should keep top of mind (a simplified sizing sketch follows the list):

◈ Processor core count and per core performance. More cores can process more data and fast performing cores crunch that data more quickly.

◈ Processor optimizations. These can provide noteworthy performance gains in data analysis.

◈ Memory bandwidth and memory capacity. How much data can be stored and how quickly it can be moved are key factors.

◈ Location of data. The shorter the distance data sets must travel to reach compute, the faster your intelligence can be gleaned and used as fuel for your business.
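
To make those factors concrete, here is a deliberately simplified sizing sketch; every number in it is an assumption chosen for illustration, not a benchmark result:

```python
# Rough, illustrative sizing sketch: estimate how long a scan-heavy analytics
# job takes when it is bound by either compute or memory bandwidth.
# All numbers are assumptions for illustration, not benchmark results.
dataset_gb = 2_000                 # data to scan per run
cores = 48                         # processor core count
per_core_rate_gbps = 1.5           # GB/s each core can process (assumed)
memory_bw_gbps = 250               # aggregate memory bandwidth (assumed)

compute_limit = cores * per_core_rate_gbps        # 72 GB/s in this example
effective_rate = min(compute_limit, memory_bw_gbps)

print(f"Effective scan rate: {effective_rate:.0f} GB/s")
print(f"Estimated scan time: {dataset_gb / effective_rate:.1f} s")
```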

Dell EMC’s new eBook, Modern Servers are the Key to Organizing the Chaos of Data and Analytics, walks you through important infrastructure concerns as you pinpoint the best way to make business gold out of crude data. The eBook focuses on the technology behind the servers that are most optimized to process these heavy-hitting workloads. We also highlight several options for your IT shop designed to meet you where you are in your journey, all keeping in mind that without the proper “refinery” in your server room, you can’t refine data into insights quickly and accurately.

Saturday, 14 September 2019

5 Questions to ask Pure while at Pure//Accelerate 2019

Well, it’s that time of year again. When fall arrives, so do many of the storage vendors’ annual conferences. As some of you may be aware, Pure Storage will soon be holding its annual ‘Pure//Accelerate 2019’ conference in Austin, Texas, very close to where Dell Technologies started ~35 years ago. This fact got me thinking about the different paths Dell Technologies and Pure Storage have taken in recent years and the decisions we’ve both made to better serve our customers.

If I was a customer trying to decide which company would be a better technology partner for years to come, I would have several questions. So, I decided to write down some of the questions I think customers should be asking of Pure Storage at their conference:

When will FlashArray be able to ‘scale out’?


Pure Storage’s current FlashArray architecture uses a scale-up-only, dual-controller, active-passive design. Dell Technologies has storage platforms in our portfolio that can scale up and/or scale out to meet customers’ requirements. We believe that our portfolio provides more choice and a good opportunity for customers to match the right platform to the workloads and use cases they need. I am interested in hearing Pure’s point of view on the benefits of a scale-up-only architecture in the enterprise market when faster media types (like NVMe and now SCM) have started to shift the performance bottleneck back to the controller.


Will Pure continue to use proprietary flash modules or adopt industry standard media?


Pure has developed its own proprietary NVMe modules as part of the FlashArray architecture. If I were a FlashArray customer, I would want to understand the risks of committing to a proprietary technology vs. leveraging industry-standard NVMe drives. If you believe the past is a good guideline, then proprietary systems don’t have a good track record. Also, I’d like to understand how the innovation cycle of their NVMe modules will match the rest of the industry’s innovations. Last, but not least, what are the implications for supply chain management and long-run availability of their proprietary components? If/when Pure decides to add SCM (Storage Class Memory) support, will that be a proprietary design also?

When will Pure be adding intelligent storage tiering to their platforms?


With new and faster media types coming to market – such as SCM (Storage Class Memory), QLC (Quad Level Cell) drives, etc. – customers will be able to place their most critical data according to the media type IF the array has intelligent tiering, like we have today at Dell Technologies (as an example, PowerMax just launched full support for SCM media; see the launch video here). If Pure releases SCM and/or QLC in their platforms, will they also be adding intelligent tiering, or any tiering at all, so customers can get the most from their investment without being bottlenecked by the slowest media installed?


What is the future of FlashBlade given the recent acquisitions Pure has made (e.g. Compuverde)?


It would be interesting to know how many customers are actually buying FlashBlade arrays and what the real-world use cases are. I have seen it positioned as primary File and Object storage, but also as a backup target with fast restore. In addition, are there any plans to improve FlashArray file services capabilities with the acquisition of Compuverde? If so, what are the implications for the future of FlashBlade as a primary File and Object storage array?

Does paying a premium for Evergreen Gold to get free controllers every three years ALSO provide a performance improvement guarantee from the new controllers?


As other storage vendors have also noticed, Pure’s performance improvements from one controller generation to the next have not been as big as one might have expected. Keep in mind that if you bought into the Evergreen Gold messaging, then by the time you get your new controllers you have already paid for six years of Evergreen Gold, whether the performance benefit turns out to be 50% or only 5% (remember, you don’t get your ‘free’ controllers until you renew your three-year Evergreen Gold contract for another three years). We feel this is something customers should ask before committing to a six-year support program like Evergreen Gold.

Enjoy your time in Austin


We hope you have a great time in our hometown of Austin. I also want to take this opportunity to encourage you to reach out to your Dell Technologies representative if you would like to learn more about our solutions while you are in town; we will be happy to meet with you.

Thursday, 12 September 2019

AI Scaling and Other Musings

Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) are more than just flair for an expo booth. Companies are transforming their business practices, driving productivity, and creating new opportunities with the power of data and AI. Faced with limited resources, organizations must decide whether to scale their AI processes up or out. Getting this key decision right helps an organization better leverage the talent of its data scientists, capture operational efficiencies, transform decision making, and deliver stronger business results.

So, what are AI, ML, and DL, and how are they related? DL is a subset of ML, and ML is a subset of AI – clear as mud, right? AI can independently sense, reason, act, and adapt. ML and DL use mathematical algorithms to ‘learn’, as opposed to being explicitly told what to do as in expert systems (imperative programming using a lot of ‘if-then-else’ statements). ML techniques use a variety of algorithms to create mathematical models, which can then be used to predict an output based on some new input. DL structures algorithms in layers to create an artificial “neural network” that can learn and improve with vast amounts of data, producing a better output (a decision) based on some new input.
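To make that distinction concrete, here is a minimal, purely illustrative sketch in Python (the loan-approval scenario, the numbers, and the choice of scikit-learn are all my own assumptions, not anything from a specific product): an expert system encodes the rule by hand, while ML learns an equivalent rule from labeled examples.

from sklearn.linear_model import LogisticRegression  # assumed ML library for this sketch

# Expert system: the rule is written by hand as explicit if/then/else logic.
def approve_loan_rule(income_k, debt_k):
    # income and debt in thousands of dollars
    if income_k > 50 and debt_k < 10:
        return "approve"
    return "deny"

# Machine learning: the "rule" is learned from labeled examples instead.
X = [[60, 5], [30, 20], [80, 2], [25, 15]]   # hypothetical income, debt (thousands)
y = ["approve", "deny", "approve", "deny"]   # past decisions used as labels

model = LogisticRegression().fit(X, y)       # learn a decision boundary from the data
print(model.predict([[55, 8]]))              # predict the outcome for a new applicant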

Some characteristics of ML and DL are shown below in Figures 1 and 2:

Figure 1

Figure 2

As an example of ML versus DL, think of the problem of predicting someone’s weight (Figure 3). It could be a simple data set of heights and weights; using ML, you come up with a line of best fit. Alternatively, you could have tons of image data labeled with weight, and DL could refine the model with many more considerations, such as facial features, thereby improving accuracy.

Figure 3
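As a minimal sketch of the ML half of that example (the height and weight numbers below are made up for illustration), a line of best fit can be computed with ordinary least squares in a few lines of Python:

import numpy as np

# Hypothetical training data: heights (cm) and weights (kg)
heights = np.array([150, 160, 170, 180, 190])
weights = np.array([52, 60, 68, 77, 85])

# Ordinary least squares: fit weight = slope * height + intercept
slope, intercept = np.polyfit(heights, weights, deg=1)

# Predict the weight of a new, unseen person from height alone
print(slope * 175 + intercept)   # roughly 72 kg for this toy data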

AI runs on three fuels: lots of data; advanced models, algorithms, and software; and high-performance infrastructure. As a general rule, if you are fortunate enough to have all three fuels, your organization should use DL. If you have two of the three or fewer, use ML.

The AI journey is a challenge. When we talk to our customers, they cite limited expertise, inadequate data management, and constrained budgets. Their business challenge is threefold: Define, Develop, and Deploy. They must define the use cases, requirements, and business goals for artificial intelligence. Then, they must develop the data sources, infrastructure, trained models, and refinement process surrounding an AI system. Finally, they need to deploy the AI-enabled systems, with tracking tools and inferencing, at scale.

Now that we have covered the seemingly daunting challenges of implementing AI, let’s dive into the upbeat topic of a scale-up or scale-out strategy for AI. Scaling up means adding more resources to an existing system. In the case of AI, that means how many domain-specific architecture (DSA) elements are in an individual server. DSA has become shorthand for anything that accelerates beyond general-purpose computing, such as GPUs, ASICs, FPGAs, IPUs, etc. Scaling out, however, means adding more systems, each with some number of GPUs. Think of this in server terms as a 32-socket server versus a 2-socket server. This is like the 30+ year-old argument of mainframe versus x86 scale-out, where scale-out economics emerged as the clear victor.

For example, consider a family with independent transportation needs. The family could buy a Hennessey Venom GT – one of the fastest cars on the planet – and, one by one, share it to rapidly meet their transportation needs. Or they could buy n sufficient-performance cars and meet all of their needs all of the time.

Figure 4

Just as the family will need to take concurrent trips, corporations on the AI journey will need more than one data scientist. Our job in technology is to make all data scientists productive all the time.

The first set of data below uses MLPerf (MLPerf builds fair and useful benchmarks for measuring training and inference performance of ML hardware, software, and services) across the Dell portfolio of domain-specific architecture (DSA) enabled servers. In Figure 5, you can see the scaling from 2 V100 GPUs in the Precision 5820, to 4 V100s in a PowerEdge C4140, to 8 V100s in the DSS 8440, to 8 V100s in an NVLink-enabled server. The 8-GPU NVLink server performs slightly better than the 8-GPU PCIe-based DSS 8440 (note the DSS 8440 can support ten V100s).

Figure 5

If you look closer at the individual 8-GPU scores in the following two figures, the scores are very close, with the exception of translation and recommendation for the two different 8-GPU cases. That said, the performance gap between NVLink and PCIe has closed with proper PCIe topologies and better PCIe switch peer-to-peer support, and it will close further with PCIe Gen4 and Gen5 data rates of 16 GT/s and 32 GT/s respectively, versus 8 GT/s for Gen3.

Figure 6

Figure 7
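To put rough numbers on those generational jumps (a back-of-the-envelope calculation assuming an x16 link and the 128b/130b encoding used by Gen3 and later), per-direction bandwidth roughly doubles each generation:

# Approximate per-direction bandwidth of a PCIe x16 link.
# Gen3/4/5 use 128b/130b encoding, so usable bytes/s ~= GT/s * (128/130) / 8 per lane.
LANES = 16

for gen, gt_per_s in [("Gen3", 8), ("Gen4", 16), ("Gen5", 32)]:
    gb_per_s = gt_per_s * (128 / 130) / 8 * LANES
    print(f"PCIe {gen} x16: ~{gb_per_s:.1f} GB/s per direction")
# Prints roughly 15.8, 31.5, and 63.0 GB/s respectively.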

If we take the same MLPerf benchmarks, compare an 8-GPU NVLink server to two 4-GPU NVLink servers, and spread the sub-benchmarks across the two servers, we can get all the work done faster than on a single 8-GPU NVLink server (Figure 8).

Figure 8
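A simple way to see why this can happen: the speed-up per sub-benchmark from 4 to 8 GPUs is sub-linear, so two 4-GPU servers running different sub-benchmarks side by side can finish the whole suite sooner. The sketch below uses purely hypothetical timings (not MLPerf results) just to illustrate the arithmetic:

# Hypothetical minutes per sub-benchmark; 8 GPUs are faster than 4, but far from 2x.
time_4gpu = {"image": 60, "object": 50, "translation": 40, "recommendation": 30}
time_8gpu = {"image": 40, "object": 35, "translation": 30, "recommendation": 25}

# One 8-GPU server runs the sub-benchmarks back to back.
single_server = sum(time_8gpu.values())

# Two 4-GPU servers split the sub-benchmarks; total time is the busier server (makespan).
server_a = time_4gpu["image"] + time_4gpu["recommendation"]   # 90 minutes
server_b = time_4gpu["object"] + time_4gpu["translation"]     # 90 minutes
two_servers = max(server_a, server_b)

print(single_server, two_servers)   # 130 vs 90: the pair finishes the suite first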

This checks out because, according to Amdahl’s law, speed-up is not linear at high degrees of parallelism. Thus, it is important to pick a point on the speed-up curve before it reaches the non-linear region and to stay in the scale-out economic sweet spot. Figure 9 shows scaling for 1-to-4 and 1-to-8 GPUs; as you can see, 1-to-4 GPU scaling has higher efficiency across the board.

Figure 9
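For reference, Amdahl’s law bounds the speed-up from N parallel workers when some fraction of the work remains serial; a quick sketch (the 90% parallel fraction is an assumed, illustrative value) shows why 1-to-4 scaling looks more efficient than 1-to-8:

# Amdahl's law: speedup(N) = 1 / ((1 - p) + p / N), where p is the parallel fraction.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

p = 0.90  # assume 90% of the training workload parallelizes across GPUs
for n in (2, 4, 8):
    s = amdahl_speedup(p, n)
    print(f"{n} GPUs: {s:.2f}x speedup, {s / n:.0%} scaling efficiency")
# Roughly 1.82x (91%), 3.08x (77%), 4.71x (59%): efficiency drops as GPUs are added.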

Mark Twain popularized the saying, “There are lies, damned lies, and statistics.” If he were still a journalist in the Bay Area today, he would turn our attention toward the pitfalls of benchmarks. Benchmarks in the right context are fine, but reduced to simple metrics, they can cause serious problems. Leaders must understand the underlying drivers of benchmarks to make informed decisions. Continuing our scale-out versus scale-up example, two scale-out nodes outperform a single scale-up node on MLPerf, the best industry benchmark. The scale-out C4140 is also the best MLPerf-per-dollar option for AI, which reflects the optimization point of using economical scale-out nodes. Additionally, you may have noticed in the news that VMware recently announced the acquisition of Bitfusion, which takes a scale-out approach to elastic AI infrastructure.

Now, this is not to say that 8+ GPU systems don’t have their place; just as we still have 8- to 32-socket x86 servers, they have their place and need. But there are economic laws and Amdahl’s law at play, which suggest that the economics of scale-out win except in corner cases. Scaling GPUs is impacted by two main variables: data size and optimized code. If the data set is too small, scaling to more GPUs is inefficient: you end up shuffling parameters between GPUs more than doing computation on the data. In practice, most enterprise DL models and use cases will run best on one GPU because of dataset sizes and code quality. AI-proficient companies have hired skilled “ninja” programmers because their use cases and datasets require training across 8+ GPUs to meet time-to-solution requirements. Everybody else: good luck getting better performance on two or more GPUs than on a single GPU.

The fact is this: companies on an AI journey will have tens or hundreds of data scientists, not just one. It is our job in technology to make sure all data scientists are as productive as possible. If one data scientist finishes her job fast while the others are drinking coffee and waiting for their turn, the company is not productive. Much like buying a supercar to share with your spouse, only one person can be happy at a time. As the monk and poet John Lydgate wisely wrote, “You can please some of the people all of the time, you can please all of the people some of the time, but you can’t please all of the people all of the time.” Well, in this case you can make all the data scientists happy all the time – by investing in scale-out and proper GPU workload orchestration.