Wednesday 30 January 2019

Discover All the Opportunities around Dell EMC Microsoft Solutions

Learn how three compelling events will help you drive customer conversations around the need to modernize their Microsoft application environments.


Have you investigated the key reasons why you should be embracing and driving customer conversations around new Dell EMC Microsoft Solutions? We’ve now produced a full suite of training and sales enablement materials to help you get up to speed.

There are three compelling events that will invariably help you drive business opportunities:

1. Cloud strategy as an essential aspect of Modernized Infrastructure


Every one of your customers is considering how the cloud can help their business, and many have already moved some applications and development activities onto the Microsoft Azure public cloud. However, many (if not most) Azure public cloud users are actually looking for a hybrid cloud option – because of data gravity or application alignment.

You can help them as they begin to work with Microsoft Azure Stack. How? By introducing them to Dell EMC VxRack AS for Microsoft Azure Stack, which brings the cloud to the workload, delivering infrastructure and platform as a service with a consistent Azure experience – both on-premises and in the public cloud.

Proactively driving the dialog around optimizing development environments in a hybrid cloud model leads to logical conversations around the customer’s on-premises infrastructure status, as well as key business applications such as Microsoft SQL Server.

It’s a compelling conversation and provides an excellent pivot point for further discussions around modernization and migration.

2. Hyper-converged infrastructure helps drive tangible operational benefits


Are your customers using hyper-converged infrastructure (HCI) yet? With a CAGR of more than 60% through 2019, HCI is not just a trend – it’s a phenomenon. Customers are realizing tremendous operational and performance benefits from hyper-convergence, and Dell EMC has the industry’s broadest HCI portfolio.

Our Solutions for Microsoft Windows Server Software Defined (WSSD) / Storage Spaces Direct (S2D) deliver all the performance capabilities of Microsoft’s unique HCI design, with additional one-stop support – from the hardware, all the way to the software stack.

What’s more, because S2D is included at no extra cost with Windows Server Datacenter Edition licenses, it removes a major investment hurdle – making it the perfect solution for customers who have already embraced Microsoft business applications, and a great recommendation for those who haven’t yet tried it.

3. The SQL Server 2008 end of support opportunity can accelerate application migrations


Microsoft SQL Server is the most widely deployed database management solution on the market – approximately three times more broadly deployed than all other DBMSs combined. So your customer’s SQL Server environment status should always be a topic of conversation.

The end of support for SQL Server 2008 is a major, compelling event – it affects 90% of organizations currently running SQL Server and is a massive opportunity for you, as it will force many organizations to migrate to the latest version.

You can provide significant business value by helping customers migrate to modernized infrastructure, with a modern operating system running on modern hardware and the most recent version of Microsoft SQL Server.

Become a trusted advisor for the entire Microsoft data estate


Whether they’re on their journey to Microsoft Azure cloud utilization, reaping the benefits of HCI modernization via Microsoft WSSD/S2D, or leveraging the enhanced data analytics and expanded business benefits delivered by Microsoft SQL Server, the Dell EMC Microsoft Solutions portfolio addresses your customer’s entire Microsoft data estate.

These new pre-tested, pre-validated and fully certified Solutions help you support your customers on their journey towards Modernized Infrastructure, while also positioning you as a trusted advisor for optimal migrations and integrations.

The above events will necessitate new infrastructure to support the core Microsoft applications. As a Dell EMC partner, you’re ideally positioned to guide your customers through their options, arriving at an optimized solution that’s specifically designed to deliver maximum operational and performance value.

How Real is AI?

Every few years a new hype technology becomes the shining star. It sucks all the oxygen out of the room and becomes the headline darling for a while. The list of technologies that have moved the needle on managing a business better, or helped grow one, is impressive. They provide great gains, but sometimes have a shelf life. This might be due to an eventual lack of impressive results, but more often they are displaced by a newer, shinier technology that simply does the job better, faster, and stronger. Business management software is just one example of a market that is no stranger to – and a beneficiary of – new tech, and the gutter is littered with market losers that failed to adapt.


This may initially sound like fear-mongering journalism, but there are a few very fine and accurate points here. AI tools aren’t the be-all and end-all; they are a natural progression. All the latest shiny tech tools have matured over time into the AI tools that exist today – natural accretions rather than massive replacements. On the flip side, for those that fail to adapt, the risks can be disruptive, dilutive, or outright company-ending.

Are you ready to start with AI now? Or are you planning to wait?

My recommendation? Start now.

Now, let me tell you why.

First, AI systems are accretive.  One doesn’t wake up one day having earned a PhD, completed Ironman Kona, or implemented a fully successful AI system.  These things take time.  There is a life cycle, similar to the crawl-walk-run analogy, by which companies move from introducing AI, to implementing successful models, to building a fully successful, scaled AI implementation.  Scaled AI implementations aren’t a few Data Scientists with a handful of GPUs sporadically creating successful models that deliver one-off results.  They are a holistic environment where Data Scientists and Data Engineers can label and wrangle data, visualize results, create models, and review the efficacy of those models at scale.  This might mean multiple models trained on large data sets per day for each Data Scientist, and constant wrangling by Data Engineers.

The latest #RapidsAI announcement from NVIDIA, targeting GPUs for Data Science and Engineering, is certain to provide yet another disruption in this changing environment.  It is another game-changing evolution that is modernizing Data Science and Data Engineering tools. The efficiency of accelerating and scaling Data Engineering tasks with GPUs will take time to roll through environments, however.  This is part of the accretive nature of building large environments.  Like RAPIDS, there will always be a next new innovation.

Second, governance doesn’t happen overnight.  Do not confuse governance of AI and advanced analytics with regulatory oversight.  They are complementary, but not mutually exclusive.  Good governance of advanced analytics includes reviews for bias (the chance a model has inherent incorrect assumptions and affiliations), for outdated models (where the demographic or historical data set on which a model was built has materially changed), and for alignment (where the model and its results must be in line with the company’s mission and not broadcast an incorrect statement about the company).
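To make the “outdated model” review concrete, here is a minimal, hypothetical sketch of one common drift check – the Population Stability Index – comparing a training-era demographic distribution with current data. The buckets, numbers, and 0.2 threshold are all illustrative, not a Dell EMC tool:

```python
# Hypothetical governance check: flag a model for review when live data
# has drifted away from the distribution it was trained on. Population
# Stability Index (PSI) is one common drift metric; the buckets,
# numbers, and threshold below are illustrative only.
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two bucketed distributions."""
    score = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)
        score += (a - e) * math.log(a / e)
    return score

# Fraction of customers in each age bucket: at training time vs. today.
training_dist = [0.25, 0.35, 0.25, 0.15]
current_dist = [0.10, 0.30, 0.35, 0.25]  # the demographics have shifted

score = psi(training_dist, current_dist)
# Rule of thumb: PSI above ~0.2 means the model deserves a review.
print(f"PSI = {score:.2f} -> {'review model' if score > 0.2 else 'ok'}")
# prints: PSI = 0.23 -> review model
```

A scheduled check like this, run against each production model, is one small piece of the governance life cycle described above.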

In my work with the automotive industry around ADAS (Advanced Driver Assistance Systems), most of the major model systems for the OEMs and Tier 1 ADAS manufacturers are built for daily recalibration.  To be more precise, they have created an SLA and environment in which they can re-train the entire simulation of a model in a single day. This is an example of an industry where regulatory governance (of safety) is currently limited, but coming. By anticipating the need for fast turnaround to quickly address any sudden safety issues, their AI governance is ready for any future changes a government may mandate. The ability to get products to market sooner is a bonus. This is what mature AI governance looks like.

Third, and probably the least considered, is the investment path into AI.  I am regularly approached by technology managers struggling to insource their AI endeavor.  All too often, these companies rushed onto the AI bandwagon without fully mapping the path, relying on a third party’s fully embedded AI technology to bootstrap the effort.  This accelerated their first steps into AI and bought them the speed discussed in the first point above.  Unfortunately, it often comes at the cost of any intellectual property (IP).  At the end of the day, all the IP is owned by the third party – leaving the company’s product differentiation controlled by someone else’s technology.  This is strategically challenging and often leads to issues around cost, or around how the company wants to eventually mature the technology into its own offering.  It cannot, because it does not own the IP.  This is the difference between a platform-as-a-service (PaaS) model, where you consume the infrastructure and own the technology, and a software-as-a-service (SaaS) model, where you only own specific models built on someone else’s technology.  It is important to understand the difference and the impact on your business.

Are you ready to be enlightened?

Dell Technologies can help.  We have experience scaling AI, and we have partners who can operate these environments on your behalf in a cost-friendly PaaS model.  We can help you own your intellectual property at your desired cost model.

Thursday 24 January 2019

Accelerating AI and Deep Learning with Dell EMC Isilon and NVIDIA GPUs

Over the last few years, Dell EMC and NVIDIA have established a strong partnership to help organizations accelerate their AI initiatives. For organizations that prefer to build their own solution, we offer Dell EMC’s ultra-dense PowerEdge C-series, with NVIDIA’s TESLA V100 Tensor Core GPUs, which allows scale-out AI solutions from four up to hundreds of GPUs per cluster. For customers looking to leverage a pre-validated hardware and software stack for their Deep Learning initiatives, we offer Dell EMC Ready Solutions for AI: Deep Learning with NVIDIA, which also feature Dell EMC Isilon All-Flash storage.  Our partnership is built on the philosophy of offering flexibility and informed choice across a broad portfolio.

To give organizations even more flexibility in how they deploy AI with breakthrough performance for large-scale deep learning, Dell EMC and NVIDIA have recently collaborated on a new reference architecture that combines Dell EMC Isilon All-Flash scale-out NAS storage with NVIDIA DGX-1 servers for AI and deep learning (DL) workloads.

To validate the new reference architecture, we ran multiple industry-standard image classification benchmarks using 22 TB datasets to simulate real-world training and inference workloads. This testing was done on systems ranging from one DGX-1 server, all the way to nine DGX-1 servers (72 Tesla V100 GPUs) connected to eight Isilon F800 nodes.

This blog post summarizes the DL workflow, the training pipeline, the benchmark methodology, and finally the results of the benchmarks.

Key components of the reference architecture shown in figure 1 include:

◈ Dell EMC Isilon All-Flash scale-out NAS storage delivers the scale (up to 33 PB), performance (up to 540 GB/s), and concurrency (up to millions of connections) to eliminate the storage I/O bottleneck, keeping the most data-hungry compute layers fed to accelerate AI workloads at scale.
◈ NVIDIA DGX-1 servers, which integrate up to eight NVIDIA Tesla V100 Tensor Core GPUs fully interconnected in a hybrid cube-mesh topology. Each DGX-1 server can deliver 1 petaFLOPS of AI performance and is powered by the DGX software stack, which includes NVIDIA-optimized versions of the most popular deep learning frameworks for maximized training performance.


Figure 1: Reference Architecture

Deep Learning Workflow


As visualized in figure 2, DL usually consists of two distinct workflows: model development and inference.


Figure 2: Common DL Workflows: Model development and inference

The workflow steps are defined and detailed below.

1. Ingest Labeled Data – In this step, the labeled data (e.g. images and their labels, which indicate whether an image contains a dog, cat, or horse) are ingested into the Isilon storage system. Data can be ingested via the NFS, SMB, and HDFS protocols.

2. Transform – Transformation includes all operations that are applied to the labeled data before they are passed to the DL algorithm. It is sometimes referred to as preprocessing. For images, this often includes file parsing, JPEG decoding, cropping, resizing, rotation, and color adjustments. Transformations can be performed on the entire dataset ahead of time, storing the transformed data on Isilon storage. Many transformations can also be applied in a training pipeline, avoiding the need to store the intermediate data.

3. Train Model – In this phase, the model parameters are learned from the labeled data stored on Isilon. This is done through the training pipeline shown in figure 3, consisting of the following:


Figure 3: Training pipeline

◈ Preprocessing – The preprocessing pipeline uses the DGX-1 server CPUs to read each image from Isilon storage, decode the JPEG, crop and scale the image, and finally transfer the image to the GPU. Multiple steps on multiple images are executed concurrently. JPEG decoding is generally the most CPU-intensive step and can become a bottleneck in certain cases.

◈ Forward and Backward Pass – Each image is sent through the model. In the case of image classification, there are several prebuilt neural network structures that have been proven to work well. As an example, figure 4 below shows the high-level architecture of the Inception-v3 model, which contains nearly 25 million parameters that must be learned. In this diagram, images enter from the left and the probability of each class comes out on the right. The forward pass evaluates the loss function (left to right) and the backward pass calculates the gradient (right to left). Each image contains 150,528 values (224*224*3), and the model performs hundreds of matrix calculations on millions of values. The NVIDIA Tesla GPUs perform these matrix calculations quickly and efficiently.


Figure 4: Inception v3 model architecture

◈ Optimization – All GPUs across all nodes exchange and combine their gradients through the network using the All Reduce algorithm. The communication is accelerated using NCCL and NVLink, allowing the GPUs to communicate through the Ethernet network, bypassing the CPU and PCIe buses. Finally, the model parameters are updated using the gradient descent optimization algorithm.
◈ Repeat until the desired accuracy (or another metric) is achieved. This may take hours, days, or even weeks. If the dataset is too large to cache, it will generate a sustained storage load for this duration.

4. Validate Model – Once the model training phase completes with satisfactory accuracy, you’ll want to measure the model’s accuracy on validation data stored on Isilon – data that the training process has not seen. This is done by using the trained model to make inferences from the validation data and comparing the results with the labels. This is often referred to as inference, but keep in mind that it is a distinct step from production inference.

5. Production Inference – The trained and validated model is then often deployed to a system that can perform real-time inference. It accepts a single image as input and outputs the predicted class (dog, cat, horse). Note that the Isilon storage and DGX-1 server architecture is not intended for, nor was it benchmarked for, production inference.
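The training pipeline from step 3 can be sketched in plain Python: CPU workers read and preprocess images concurrently while the training loop consumes batches. This is a stdlib-only illustration of the data flow, not the benchmark code – the paths and the stand-in functions are hypothetical:

```python
# Stdlib-only sketch of the training pipeline: concurrent CPU-side
# preprocessing (read, JPEG-decode, crop/scale) feeding a batched
# training loop, mimicking how the DGX-1 CPUs keep the GPUs fed from
# Isilon. The paths and stand-in functions are hypothetical.
from concurrent.futures import ThreadPoolExecutor

VALUES_PER_IMAGE = 224 * 224 * 3  # Inception-v3 input: 150,528 values

def load_and_preprocess(path):
    # Stand-in for: read from Isilon, decode JPEG, crop/scale to 224x224.
    return (path, VALUES_PER_IMAGE)

def train_step(batch):
    # Stand-in for the forward pass, backward pass, and all-reduce.
    return sum(values for _, values in batch)

paths = [f"/mnt/isilon/imagenet/img_{i:04d}.jpg" for i in range(64)]
batch_size = 8

# Preprocess concurrently on CPU threads, as the real pipeline does.
with ThreadPoolExecutor(max_workers=4) as pool:
    images = list(pool.map(load_and_preprocess, paths))

values_seen = 0
for i in range(0, len(images), batch_size):
    values_seen += train_step(images[i:i + batch_size])

print(values_seen)  # 64 images x 150,528 values = 9633792
```

In the real system, frameworks such as TensorFlow overlap these stages automatically so that storage reads and JPEG decoding never leave the GPUs idle.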

Benchmark Methodology Summary

In order to measure the performance of the solution, various benchmarks from the TensorFlow Benchmarks repository were carefully executed. This suite of benchmarks performs training of an image classification convolutional neural network (CNN) on labeled images. Essentially, the system learns whether an image contains a cat, dog, car, train, etc.

The well-known ILSVRC2012 image dataset (often referred to as ImageNet) was used. This dataset contains 1,281,167 training images in 144.8 GB[1]. All images are grouped into 1000 categories or classes. This dataset is commonly used by deep learning researchers for benchmarking and comparison studies.

When running the benchmarks on the 148 GB dataset, it was found that the storage I/O throughput gradually decreased and became virtually zero after a few minutes. This indicated that the entire dataset was cached in the Linux buffer cache on each DGX-1 server. Of course, this is not surprising since each DGX-1 server has 512 GB of RAM and this workload did not significantly use RAM for other purposes. As real datasets are often significantly larger than this, we wanted to determine the performance with datasets that are not only larger than the DGX-1 server RAM, but larger than the 2 TB of coherent shared cache available across the 8-node Isilon cluster. To accomplish this, we simply made 150 exact copies of each image archive file, creating a 22.2 TB dataset.
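The sizing behind those 150 copies is simple arithmetic – the duplicated dataset has to exceed both the 512 GB of RAM in each DGX-1 server and the 2 TB of coherent Isilon cache. A quick sanity check:

```python
# Cache-busting dataset sizing: 150 copies of the ~148 GB on-disk
# ImageNet dataset must exceed both the per-server RAM and the
# aggregate Isilon cluster cache (decimal units, as in the post).
original_gb = 148
copies = 150
dataset_tb = original_gb * copies / 1000  # 22.2 TB

dgx1_ram_tb = 0.512       # 512 GB RAM per DGX-1 server
isilon_cache_tb = 2.0     # coherent cache across 8 F800 nodes

print(f"dataset: {dataset_tb:.1f} TB")  # dataset: 22.2 TB
print(dataset_tb > dgx1_ram_tb and dataset_tb > isilon_cache_tb)  # True
```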

Benchmark Results


Figure 5: Image classification training with original 113 KB images

There are a few conclusions that we can make from the benchmarks represented above.

◈ Image throughput and therefore storage throughput scale linearly from 8 to 72 GPUs.
◈ The maximum throughput that was pulled from Isilon occurred with ResNet50 and 72 GPUs. The total storage throughput was 5907 MB/sec.
◈ For all tests shown above, each GPU had 97% utilization or higher. This indicates that the GPU was the bottleneck.
◈ The maximum CPU utilization on the DGX-1 server was 46%. This occurred with ResNet50.
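From the peak numbers above you can back out the approximate per-GPU storage demand and total image rate. This is a rough back-of-envelope calculation, assuming the ~113 KB average image size:

```python
# Rough back-of-envelope from the peak ResNet-50 result above.
# Assumes the ~113 KB average JPEG size of the original dataset.
total_mb_per_s = 5907          # peak storage throughput at 72 GPUs
gpus = 72
avg_image_mb = 113 / 1024      # ~113 KB per image

per_gpu_mb_s = total_mb_per_s / gpus          # ~82 MB/s per GPU
images_per_s = total_mb_per_s / avg_image_mb  # ~53,500 images/s total

print(f"{per_gpu_mb_s:.0f} MB/s per GPU, {images_per_s:.0f} images/s")
```

At roughly 82 MB/s per GPU, it is easy to see why the GPUs, not the Isilon cluster, were the bottleneck in these runs.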

Large Image Training


The benchmarks in the previous section used the original JPEG images from the ImageNet dataset, with an average size of 115 KB. Today it is common to perform DL on larger images. For this section, a new set of image archive files was generated by resizing all images to three times their original height and width. Each image was encoded as a JPEG with a quality of 100 to further increase the number of bytes. Finally, we made 13 copies of each image archive file. This results in a new dataset that is 22.5 TB in size, with an average image size of 1.3 MB.

Because we are using larger images with the best JPEG quality, we want to match it with the most sophisticated model in the TensorFlow Benchmark suite, which is Inception-v4.

Note that regardless of the image height and width, all images must be cropped and/or scaled to be exactly 299 by 299 pixels to be used by Inception-v4. Thus, larger images place a larger load on the preprocessing pipeline (storage, network, CPU) but not on the GPU.

The benchmark results in Figure 6 were obtained with eight Isilon F800 nodes in the cluster.


Figure 6: Image classification training with large 1.3 MB images

As before, we have linear scaling from 8 to 72 GPUs. The storage throughput with 72 GPUs was 19,895 MB/sec. GPU utilization was at 98% and CPU utilization was at 84%.
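Since every image is cropped or scaled to 299×299 before reaching the GPU, the large-image run multiplies the per-image storage and CPU cost while the GPU-side work stays fixed. A rough comparison using the numbers from both runs (directional only, since the two runs used different models):

```python
# The GPU always sees a fixed 299x299x3 Inception-v4 input, so larger
# source JPEGs load the storage/CPU pipeline, not the GPUs.
gpu_input_values = 299 * 299 * 3  # 268,203 values, image-size independent

small_image_mb = 115 / 1024       # ~115 KB original ImageNet average
large_image_mb = 1.3              # resized, quality-100 average

small_run_mb_s = 5907             # storage throughput, 72 GPUs, small images
large_run_mb_s = 19895            # storage throughput, 72 GPUs, large images

print(f"{large_image_mb / small_image_mb:.1f}x more bytes per image")     # ~11.6x
print(f"{large_run_mb_s / small_run_mb_s:.1f}x more storage throughput")  # ~3.4x
```

The per-image byte cost grows far faster than the observed storage throughput, which is consistent with the preprocessing pipeline (not storage) absorbing most of the extra work – as the 84% CPU utilization suggests.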

Wednesday 23 January 2019

FPGAs vs. GPUs: A Tale of Two Accelerators

The need for speed in AI applications has led to a growing debate on the best accelerators to use. In many cases, this debate comes down to a question of server FPGAs vs. GPUs – or field programmable gate arrays vs. graphics processing units.


To see signs of this lively debate, you need to look no further than the headlines in the tech industry. A few examples that pop up in searches:

◈ “Can FPGAs Beat GPUs in Accelerating Next-Generation Deep Learning?”
◈ “FPGA vs GPU for Machine Learning Applications: Which One Is Better?”
◈ “FPGAs Challenge GPUs as a Platform for Deep Learning”

So what is this lively debate all about? Let’s start at the beginning. Physically, FPGAs and GPUs often plug into a server PCIe slot. Some, like the NVIDIA® Volta Tesla V100 SXM2, are mounted onto the server motherboard. Note that GPUs and FPGAs do not function on their own without a server, and neither FPGAs nor GPUs replace a server’s CPU(s). They are accelerators, adding a boost to the CPU server engine. At the same time, CPUs continue to get more powerful and capable, with integrated graphics processing. So start the engines and the race is on between servers that have been chipped, turbo and supercharged.

FPGAs can be programmed after manufacturing, even after the hardware is already in the field — which is where the “field programmable” comes from in the field programmable gate array (FPGA) name. FPGAs are often deployed alongside general-purpose CPUs to accelerate throughput for targeted functions in compute- and data-intensive workloads. They allow developers to offload repetitive processing functions in workloads to rev up application performance.


GPUs are designed for the types of computations used to render lightning-fast graphics — which is where the “graphics” comes from in the graphics processing unit (GPU) name. The Mythbusters demo of GPU versus CPU is still one of my favorites, and it’s fun that the drive for video-game screen-to-controller responsiveness impacted the entire IT industry, as accelerators have been adopted for a wide range of other applications, from AutoCAD and virtual reality to cryptocurrency mining and scientific visualization.

FPGA and GPU makers continuously compare their products against CPUs, sometimes making it sound like they can take the place of CPUs. The turbo kit still cannot replace the engine of the car — at least not yet. However, they want to make the case that the boost makes all the difference. They want to prove that the acceleration is really cool. And it is, depending on how fast you want or need your applications to go. And just like with cars, it comes at a price. After the acquisition cost, the price includes the amount of heat generated (accelerators run hotter) and the fuel required (they need more power), and sometimes applications aren’t programmed to take full advantage of the available acceleration.

So which is better for AI workloads like deep learning inferencing? The answer is: It depends on the use case and the benefits you are targeting. The ample commentary on the topic finds cases where FPGAs have a clear edge and cases where GPUs are the best route forward.

Dell EMC distinguished engineer Bhavesh Patel addresses some of these questions in a tech note exploring reasons to use FPGAs alongside CPUs in the inferencing systems used in deep learning applications. A bit of background: When a deep learning neural network has been trained to know what to look for in datasets, the inferencing system can make predictions based on new data. Inferencing is all around us in the online world. For example, inferencing is used in recommendation engines — you choose one product and the system suggests others that you’re likely to be interested in.

In his tech note, Bhavesh explains that FPGAs offer some distinct advantages when it comes to inferencing systems. These advantages include flexibility, latency and power efficiency. Let’s look at some of the points Bhavesh makes:

Flexibility for fine tuning


FPGAs provide flexibility for AI system architects looking for competitive deep learning accelerators that also support customization. The ability to tune the underlying hardware architecture and use software-defined processing allows FPGA-based platforms to deploy state-of-the-art deep learning innovations as they emerge.

Low latency for mission-critical applications


FPGAs offer unique advantages for mission-critical applications that require very low latency, such as autonomous vehicles and manufacturing operations. The data flow in these applications may arrive in streaming form, requiring pipeline-oriented processing. FPGAs are excellent for these kinds of use cases, given their support for fine-grained, bit-level operations in comparison to GPUs and CPUs.

Power savings


Power efficiency can be another key advantage of FPGAs in inferencing systems. Bhavesh notes that because the logic in an FPGA is tailored to a specific application and workload, it is extremely efficient at executing that application. This can lead to lower power usage and increased performance per watt. By comparison, CPUs may need to execute thousands of instructions to perform the same function that an FPGA may be able to implement in just a few cycles.

All of this, of course, is part of a much larger discussion on the relative merits of FPGAs and GPUs in deep learning applications — just like with turbo kits vs. superchargers. For now, let’s keep this point in mind: When you hear someone say that deep learning applications require accelerators, it’s important to take a closer look at the use case(s). I like to think about it as if I’m chipping, turbo or super-charging my truck. Is it worth it for a 10-minute commute without a good stretch of highway? Would I have to use premium fuel or get a hood scoop? Might be worth it to win the competitive race, or for that muscle car sound.

Sunday 20 January 2019

Customers Say OEM Partnerships Are Driving Huge Value in the Digital Economy

Children constantly ask “why?” I know that it can be tough for a parent to always provide an adequate answer of the why behind what we are asking our child to do, but honestly, knowing “why” helps children make sense of the world.


A clear understanding of “why” is as important to your child’s development as it is to keeping clarity in one’s business decisions and strategy. Why did we launch that product or service again? I know what it does, but why did this new solution need to be offered? Did we ask enough of the “why” questions before moving forward with our business plan?

Still asking “why?”


As a father of three and as a kid at heart, I continue to ask “why.” As a marketer, the big question that has piqued my curiosity over the last year was this: Why is the OEM market continuing to grow so fast, particularly over the last few years, and why is Dell EMC OEM seeing such rapid revenue growth? As much as I would like to credit great marketing, and as much as Sales would love to say it is all due to an amazing sales force, we know there are more reasons behind the increased market demand for OEM relationships. By OEM, I’m talking about companies that purchase third-party technology to embed or integrate into a solution that they build to market and resell to their own customers.

Of course, as Dell EMC OEM, we are delighted to be serving a market that has seen rapid demand for more OEM partnerships, but when we analyzed external contributory factors, like the improving global economy, our acquisition of EMC, and the formation of Dell Technologies, we knew that as important as these were to offering more to our customers, there had to be other growth drivers. Why are OEM partnerships on the rise? Why now versus the last two decades?

Curiosity is the mother of all knowledge


Whenever you have questions related to your business challenges or successes, always ask the customer – “why?”

Customer feedback was actually the impetus behind our OEM business. And so, when we have questions, we naturally turn to our customers for answers. With this in mind, we commissioned Futurum Research to conduct The OEM Partnership Survey. This captures the voices of more than 1,000 senior decision makers in OEM-type business models across the globe, examining the ability of OEM and third-party partnerships to drive innovation, improve time to market, and increase competitive value.

Published today, the survey report makes for compelling reading. It validates and reflects many of our own experiences in the OEM marketplace, while also providing fresh insights. It’s a real treasure trove of interesting data. Today, I want to share just a few of the key takeaways that jumped off the page for me, as well as some customer use cases to help bring the data to life.

Partnership speeds innovation and delivers increased revenue


Most respondents cited OEM partnerships as being very or critically important in achieving key business objectives, like increased revenue and improved customer experience. For example, more than 88 percent of respondents say that existing OEM partnerships are helping them overcome barriers to innovation, with close to 83 percent indicating that OEMs had helped them accelerate their own product and services initiatives. As a result, two-thirds of the panel stated that they had been able to translate ideas into market offerings with their OEM Partnerships.

Time to market matters more than ever


Let’s look at a great customer case in point. Bionivid, a genome IT company based in India, says it reduced its development costs by at least 50 percent by collaborating with Dell EMC OEM. By avoiding the expense of building hardware platforms, Bionivid was able to seize the right opportunity at the right time and gain an advantage over its competitors.

Likewise, Tracewell Systems – an Ohio, USA-based provider of standard and custom electronic hardware systems for the military/aerospace, automatic test equipment (ATE), and commercial off-the-shelf (COTS) markets – realized its business goals by partnering with Dell EMC OEM. The integration, manufacturing, and global supply chain capabilities that came with the partnership allowed Tracewell to scale rapidly and get its products to market faster.

Finding the right technology is critical


Survey participants also believe that finding the right technology can make all the difference between winning and losing, with 81 percent saying that OEMs are helping them embrace emerging technology such as artificial intelligence (AI), multi-cloud, and the Internet of Things (IoT).

Digital transformation case study


Let me share one customer story to help illustrate the power of partnership, particularly in the adoption of new technologies. Olivetti – an Italian brand established in 1908 and now Telecom Italia Group’s IoT specialist – provides small-and medium-sized manufacturers with an IoT-based plug-and-play solution to make their machines and plants smarter, and their operations more efficient and effective. To achieve this, Olivetti is working with Dell EMC OEM Solutions and Alleantia, a Dell EMC IoT Partner and Intel IoT Alliance member. The three companies have collaborated to develop a turnkey solution that enables the digital transformation of production processes into an Industry 4.0 implementation.

Time waits for no one


Based on the survey’s findings, I would argue that time has become a more valuable commodity than the technology itself. Don’t get me wrong: technological innovation is arguably the biggest driver of human progress and advancement. But most people, no matter how smart and capable, no longer have the time to build bespoke, specialist technology. There’s so much technology out there that it’s impossible to be an expert in everything. To succeed, you must collaborate where it makes the most sense for your business model.

Businesses are feeling increasing pressure to drive new innovations into their operations or offerings, as more and more companies across every industry are becoming increasingly dependent on technology to bring their ideas to market. With time and expertise in short supply, forging or deepening the right technology partnerships is a business imperative. Why spend valuable time and resources developing technology that’s already available, when an OEM partner can help bring your solution to market faster and more efficiently?

OEM market growth will continue to accelerate


It follows that as more and more companies are becoming time-poor but technology-enabled, there will be a corresponding increase in the need to build partnerships. As a result, the world of ecosystems and OEM relationships looks set to dramatically expand.

In fact, one of the most exciting predictions to come out of the report is that OEM partnerships have the potential to grow at a compound annual growth rate (CAGR) of 20 to 25 percent over the next 10 years. Over 75 percent of the survey panel say they expect to increase their use of OEM partnerships over the coming 12 to 18 months, with over 25 percent anticipating that their use of OEM partnerships will increase dramatically.
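To put that projected growth rate in perspective, compounding at 20 to 25 percent annually for ten years multiplies a market roughly six- to nine-fold. The arithmetic is a one-liner:

```python
def compound_growth(cagr: float, years: int) -> float:
    """Growth multiple after `years` of compounding at annual rate `cagr`."""
    return (1 + cagr) ** years

# Ten years at the predicted CAGR range:
compound_growth(0.20, 10)   # ~6.19x
compound_growth(0.25, 10)   # ~9.31x
```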

The expert vs the renaissance person


Here’s a question for you to reflect upon: are you an expert or an adapter? While society has traditionally valued expertise, I believe that being an expert is no longer the main prize. Instead, the ability to adapt to change is what matters most in today’s dynamic world. In many ways, companies are going through the same challenges now that today’s young people will face in the future. Despite specializing in an area of study, our kids will likely have to switch careers, maybe several times during their lifetime. Companies need to develop that same agility, and partnerships are a way to get them there.

Winning or losing – you decide!


For me, the key takeaway is this: to survive and thrive in today’s world, you need to be well-rounded and versatile – somewhat of a renaissance person. However, there’s a caveat – renaissance person or not, you still need to use the tools of the future to stay ahead of the competition.

The survey shows that forging the right partnerships will make all the difference between winning and losing. One prediction is that companies that engage in above-average levels of OEM partnership have the opportunity to accelerate sales growth and cost reductions by 35 and 45 percent, respectively, by 2025.

On the flip side, the survey also predicts that by 2025, up to 50 percent of current business will cease to exist in its current competitive state, driven by new technology and customer evolution. Here’s the burning question – where will you be in seven years’ time? Most importantly, “why?”

Friday 18 January 2019

How Surveillance Is Transforming These Top Six Industries

Changing trends and surveillance technologies are creating powerful new solutions across safety, security, and day-to-day operations for these six leading industries.

Dell EMC Study Material, Dell EMC Guides, Dell EMC Certification, Dell EMC Tutorial and Material

Surveillance is rapidly changing across the world, and the technology supporting it is getting pretty complex fast. Gone are the days of analog cameras and single-person control rooms. Today, effective surveillance spans an interconnected, intelligent ecosystem of high-definition imaging, multi-modal sensors, data-sharing networks, and powerful analytics—a combination resulting in insights derived from digital images and video, otherwise known as “computer vision.”

Industries from just about every vertical are leveraging advanced surveillance technologies to protect employee well-being, safeguard communities, and improve overall processes and services, but perhaps none more than these six key industries where surveillance solutions are achieving some of the most impressive results around the world.

Education


Just ten years ago high schools were one of the primary users of surveillance cameras. Today, however, we see nearly every division of education integrate and adopt new surveillance technologies in order to keep students, faculty, and employees safe—whether that’s from vandalism, theft, or a potential active-shooter situation.

On college campuses, surveillance is more than just a tool for safety. It’s become a powerful recruiting device and persuader for students and parents who are increasingly conscious of campus safety. In fact, popular sites such as US News & World Report include campus safety as part of their college rankings, referencing safety data compiled by the U.S. Department of Education.

State and Local Government (SLG)


When it comes to government, surveillance is largely about safe communities. From small-scale town-wide initiatives to major country-wide overhauls, state and local CIOs are leveraging quicker, smarter, and more secure surveillance infrastructures in order to keep their communities feeling safe and to meet the rising demand for more efficient interaction and information transfer. According to a recent IDC report, intelligent transportation and data-driven public safety leveraging video surveillance and street lighting represent a quarter of spending by smart cities this year.

For many state and local governments the top priority is modernizing mission-critical legacy systems to support integration with newer, more secure infrastructures, and government leaders are seeing the successful impacts right away. For example, 78 percent of those who have deployed cloud-enabled solutions say they have lowered their asset-investment threshold and improved their ability to innovate. That includes decreasing response times to criminal activity and emergencies, deterring criminal and gang activity, providing digital evidence and documentation, and improving safety on roads and sidewalks.

Effective SLG surveillance includes counter-terrorism, and when it comes down to it, it’s about creating an environment where the community as a whole can feel safer, knowing police and other emergency responders are equipped with the best tools to react and respond quickly.

Transportation


Branching out from state and local government, the transportation industries that create and connect these communities share several of the same problems. Mass transit systems, including trains, subways, buses, and planes, all contend with crime. Theft, assault, vandalism, and terrorism are most effectively prevented and stopped with intelligent, end-to-end engineered surveillance systems that not only document current crimes but deter future ones.

When it comes to airport security, the first thought that tends to come to mind is Customs and Border Protection, but it’s about so much more—including safety and crime prevention at TSA checkpoints, baggage claim areas, tarmacs, and terminals.

Transit systems are also incorporating computer vision to improve traffic incident management, first-responder alerts, and traveler behavior analysis, and to help eliminate overcrowding during peak travel hours. For example, busy subway systems can leverage people counting to alert engineers when trains have reached capacity.
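That people-counting alert reduces to a simple threshold check. Here is a hypothetical sketch: per-car counts (as a computer-vision pipeline might emit them) are summed and compared against the train's rated capacity; the 90% default threshold is an assumption for illustration.

```python
def train_at_capacity(car_counts, capacity, threshold=0.9):
    """Return True once total riders reach `threshold` of rated capacity."""
    return sum(car_counts) >= threshold * capacity

# An 8-car train rated for 1,000 riders:
train_at_capacity([120] * 8, 1000)   # 960 riders -> True, alert engineers
train_at_capacity([100] * 8, 1000)   # 800 riders -> False
```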

Healthcare


Healthcare as a leading surveillance market may come as a surprise to some, but industry surveys consistently show that one of the biggest factors in whether healthcare employees leave or stay in their positions is how safe they feel in the workplace. This is critical in an extremely competitive industry where one of the largest challenges is attracting and retaining highly qualified employees, who need to feel safe walking around the hospital, being with patients, and walking to and from parking lots, often in the middle of the night. Computer vision is also being deployed in healthcare facilities to help identify workers’-compensation fraud and undue claims, as well as to help prevent theft of prescription drugs.

An up-and-coming surveillance use case in healthcare is remote patient monitoring and digital patient sitters. Video surveillance and computer vision help provide round-the-clock virtual care to those that need it most while helping to minimize overcrowding in hospitals, giving patients the freedom to be at home versus a hospital bed. The right visibility of those patients in turn helps caregivers administer the best care possible.

Casinos and Entertainment


Casinos differ from most other industries in one major way: strict surveillance requirements must be met by law before the business doors can even open. Because surveillance technology is a standing business line item, budgets are set aside for it. That makes it all the more important for casinos to invest in the right technology, one that is reliable and operates consistently to ensure business continuity.

For example, since each gambling table is required to have three cameras covering it at all times, losing one camera can force a table to shut down. A lost table means lost profits. Now consider if a larger percentage of cameras fail, then the entire casino floor or even beyond would be required to shut down. Issues with surveillance hardware or software can translate to substantial losses for a casino business.

Retail


Many modern retailers are using video surveillance in some fairly straightforward use cases like traditional loss prevention, but some are leveraging edge-to-core-to-cloud architecture and hybrid strategies to stand out from their competitors in a way that is anything but traditional.

Classic loss prevention is a cornerstone for all retailers: detecting theft, whether internal or external, and reducing losses so that every dollar saved goes toward the bottom line. Retailers are now able to use computer vision not just to identify losses that are actively occurring, but to predict complex patterns and surface customer insights. For instance, what kind of display will engage a customer the most? Or, for warehouse-style retailers, what are the risks associated with stacked items that may collapse and cause injury and a lawsuit?

End-to-End Surveillance from Camera to Core to Cloud


A recurring pattern across industries comes down to the difficulty in deciding what kind of technology stack and surveillance solution is appropriate for a particular organization and how to navigate the complexities of testing, validating and deploying an integrated system. Ideally, the solution needs to be flexible and scalable enough to solve today’s problems while effectively preparing for problems that may arise tomorrow—whether that’s terrorism, vandalism, theft, or a potential active-shooter situation.  And on the flip side, what opportunities can be had from this new age of computer vision, whether it’s automated traffic alerts, virtual sitters, or customer-retention programs?

Wednesday 16 January 2019

The New Reality for Retailers: Science Fiction No More

How Dell Technologies is empowering retail surveillance with best-in-class AI and computer vision to capture brand new customer opportunities and maximize value.

Dell EMC Study Materials, Dell EMC Certifications, Dell EMC Guides, Dell EMC Live

I want to ask you what may seem like a simple question.

Why is surveillance so important for retail?

Most people would immediately assume preventing theft and saving money. And up until now, these have precisely been the major driving forces behind the development and adoption of security and surveillance technologies. According to the “2018 National Retail Security Survey,” inventory shrinkage accounted for an average loss of 1.33% of retail sales last year. That’s a total loss of $46.8 billion. The financial impact of theft is considerable, but here’s some very good news: that loss is steadily declining.

With the help of leading security and data experts like Dell Technologies, improved strategies and technologies like computer vision have helped retailers lower the average dollar loss to about half of what it was in 2016, a massive decline in lost revenue and a steady improvement that has continued for the past three years.

This brings me back to my original question about the relationship between surveillance and retail. What if I said that these new innovations were changing the way we view and use surveillance technology—pushing it into a new era where the benefits (and profits) go much further than simple security and where ideas from science fiction are becoming a new reality for both consumers and retailers?

When it comes to discovering new ways to leverage AI and IoT with surveillance, there have been many incredible advancements lately. In many cases, faster and smarter algorithms, cameras, and sensors are being revealed almost every week, and at Dell Technologies we’re spearheading several of these exciting new avenues, not just helping our enterprise partners go above and beyond their retail security needs, but also helping them and their customers achieve a new reality of personalized shopping.

Computer Vision Is Reshaping How Retailers Do Business


These developments in computer vision don’t just save money and enable real-time potential-theft evaluation. They also give retailers a brand-new opportunity to better understand their customers. New implementations of hardware and software provide our partners with the tools necessary to better engage their customers and improve their shopping experiences, resulting in longer times spent shopping and more dollars spent in the store.

For example, understanding how customers flow through the store and at what times of the day can allow the retailer to put more important items directly in their paths where the products can be more visible. Understanding flow can also help reveal why certain items are being skipped over or potentially picked up and put back. Or perhaps there are specific repeat customers or even areas of the store that result in more revenue being generated, where staff can be strategically positioned to assist customers. These few implementations are just the tip of the iceberg when it comes to how computer vision is helping to obtain personalized customer insights and knowledge of real-time and previous behavior that will improve retail profits. Due to the popularity of online shopping, it can be weeks or months before a customer comes back through the doors, so capitalizing on these in-store interactions is more important than ever.
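The flow analysis described above boils down to aggregating detection events into per-zone dwell time. A hypothetical sketch, assuming a computer-vision pipeline emits a time-ordered list of (customer_id, zone, timestamp_seconds) tuples; the zone names and event stream here are invented for illustration:

```python
from collections import defaultdict

def dwell_time_by_zone(events):
    """Sum the seconds each customer spends in each zone before moving on."""
    last_seen = {}              # customer_id -> (zone, timestamp)
    totals = defaultdict(float)
    for customer, zone, ts in events:
        if customer in last_seen:
            prev_zone, prev_ts = last_seen[customer]
            totals[prev_zone] += ts - prev_ts
        last_seen[customer] = (zone, ts)
    return dict(totals)

events = [
    ("c1", "entrance", 0), ("c1", "electronics", 30),
    ("c1", "checkout", 330), ("c2", "entrance", 10),
    ("c2", "checkout", 70),
]
dwell_time_by_zone(events)
# c1 spends 300s in electronics; entrance accumulates 30s (c1) + 60s (c2)
```

Zones with long dwell times but low sales would be candidates for repositioning displays or staff.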

Computer vision is changing everything. The purpose of surveillance up until now was focused on saving money by deterring theft, but these breakthrough developments are revealing how going forward, the new focus will instead be on revenue generation.

Next Steps to Adopting a Computer Vision Solution


Deciding to bring on an advanced surveillance solution, however, can be daunting and often riddled with complexities. That is precisely why Dell Technologies has taken the necessary steps to simplify and streamline the process with an open, holistic, and integrated platform for both surveillance and computer vision-based retail insights. Our IoT Solution for Surveillance, including computer vision and AI, is specifically designed for ease of use and consistent quality and performance.

Considerations for the Supporting Infrastructure


From an architectural standpoint, we understand how most retailers want to make real-time decisions at the store level. Cloud infrastructure and years of metadata are a strong and highly useful foundation for deeper analysis when needed, but when it comes to practical real-time decision making and per-store or even per-customer customization, that cloud-to-edge connection may not be the best option. It can even handicap an architecture that depends on constant connectivity during an event such as a hurricane. Consider, for example, those businesses that are critical in helping their communities when disaster strikes—like some of the large home improvement stores that require access to their digital systems in order to adequately respond to urgent customer needs in any situation. The better option for these types of retailers is a resilient, on-premises infrastructure. For this reason, Dell Technologies has worked with several partners to develop a robust, integrated, and scalable in-store appliance optimized for visualized streaming data with right-sized storage for analytics.

See It Live at NRF 2019


Surveillance continues to save retail more and more money when it comes to preventing theft, but our computer vision, AI capabilities, and best-in-class computing technologies are changing the way retailers go about saving money and building revenue. With new capabilities and quickly evolving AI, Dell Technologies is able to provide retailers with more personalized insights and tools that don’t just increase profits but also bridge the gap between the store and the customer.

While several of these examples of our vision and computing are already live in retail stores throughout the country, come see a deeper look at Dell Technologies’ computer vision in action live at NRF (VMware booth #1057), January 13-15.

Sunday 13 January 2019

Dell EMC Cyber Recovery Solutions at VMware VMUG Virtual Event

Dell EMC Cyber Recovery Solutions at VMware VMUG Virtual Event (December 2018)


The number of successful cyber attacks is growing, and attacks are also evolving, with more elaborate and innovative methods being used. As fast as organizations build defenses, hackers adapt and come up with new ways to work around them. Most organizations affected by these attacks have strong detection capabilities in place. The important question, however, is: could your organization recover if an attacker gets through the detection perimeter and encrypts or wipes your mission-critical data? Organizations need to consider recovery as a vital part of their overall cyber-security and risk management strategy in order to truly become resilient in today’s cyber threat landscape.

Data Integrity and Cyber Recovery: Dell EMC and VMware


In our latest webinar, Dell EMC and VMware highlight how the latest security innovations from VMware vSphere and Dell EMC Cyber Recovery, leveraging Dell Technologies, its partners, and services, can augment your overall cyber-security posture and provide a way to recover from a destructive cyber-attack.

Dell EMC Study Material, Dell EMC Guides, Dell EMC Tutorial and Materials

Mike Foley, a Staff Technical Marketing Architect at VMware, described in great detail the latest security innovations in vSphere 6.7 Update 1. VMware is progressively hardening settings by default, ensuring work environments are protected and secure from day one. VMware is also transforming endpoint detection and response with the ability to run AppDefense within the ESXi hypervisor, allowing your company to respond to changing situations and concerns quickly with a secure infrastructure through visibility and control. By focusing on a new model of security, VMware is able to digest a simpler, smaller problem set for a better signal-to-noise ratio, providing VMware AppDefense users with actionable, behavior-based alerts and responses to immediate cyber-attacks. Here is the Breakout Session.


Alex Almeida, a Consultant Product Marketing Manager at Dell EMC, guided us through a comprehensive presentation on how Dell EMC allows companies to recover their VMware environments following a destructive cyber-attack. Today, 92% of organizations can’t detect cyber-attacks quickly enough, and 59% of organizations believe that isolating affected systems and recovering from backups should be the response to ransomware. Dell EMC Cyber Recovery is a comprehensive solution providing that vault, and the newly announced Cyber Recovery 18.1 software brings end-to-end management of this solution. It runs entirely from within your data vault, giving your company the highest probability that clean, uninfected data is secured for recovery. With Cyber Recovery 18.1, customers benefit from an end-to-end automated workflow, a modern and simple UI/UX, a flexible REST API, and vault analytics with the CyberSense workflow. Dell EMC also offers Dell EMC Services for Cyber Recovery solutions, providing organizations with deployment assistance, workshops, and advisory services to determine which solution and architecture best protect your company. Here is the Breakout Session.


So why Cyber Recovery from Dell EMC Data Protection?


You need to ensure your business-critical data can withstand a cyber attack designed to destroy it, including backups and replicas. Here are the five steps to building a last line of defense.

1. Solutions Planning – Selection of application candidates, recovery time, and recovery point objectives.

2. Isolation & Governance – An isolated data center environment that is disconnected from the network and restricted from users other than those with proper clearance.

3. Automated Data Copy and Air Gap – Software to create WORM-locked data copies to a secondary set of arrays and backup targets as well as processes to create an operational air gap between the production environment and the isolated recovery zone.

4. Integrity Checking & Alerting – Workflows to stage replicated data in the isolated recovery zone and perform integrity checks to analyze whether it is impacted by malware along with mechanisms to trigger alerts on suspicious executables and data.

5. Recovery & Remediation – Procedures to perform recovery / remediation after an incident using dynamic restore processes and your existing DR procedures.
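As an illustration of steps 3 and 4 above (and only an illustration; this is not Dell EMC's Cyber Recovery software), the copy-and-verify core can be sketched with an in-memory "vault" and SHA-256 digests standing in for WORM-locked copies and integrity analytics:

```python
import hashlib

def copy_to_vault(vault: dict, name: str, data: bytes) -> str:
    """Step 3: place a copy in the isolated vault, recording its digest."""
    vault[name] = data
    return hashlib.sha256(data).hexdigest()

def verify_integrity(vault: dict, name: str, expected_digest: str) -> bool:
    """Step 4: flag a vaulted copy whose content no longer matches."""
    return hashlib.sha256(vault[name]).hexdigest() == expected_digest

vault = {}
digest = copy_to_vault(vault, "db_backup.img", b"backup contents")
verify_integrity(vault, "db_backup.img", digest)   # True while intact
vault["db_backup.img"] = b"encrypted by ransomware"
verify_integrity(vault, "db_backup.img", digest)   # now False: trigger an alert
```

In a real deployment the vault sits behind an operational air gap, the copies are WORM-locked so they cannot be rewritten at all, and the integrity analysis looks for malware signatures rather than a simple digest mismatch.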

Dell EMC Cyber Recovery, paired with VMware vSphere security, provides a very effective security and recovery solution against common attack vectors, including dormant malware, data wiping and locking, data corruption, insider attacks, and destruction of backup and storage assets. Simply put, it gives organizations an effective way to recover the lifeline of their business when other strategies fail.

Friday 11 January 2019

Server Automation – Your Key to a Faster, More Secure, & More Efficient Data Center

There’s a lot of truth to the old saying, “you don’t know what you’ve got ‘til it’s gone.”

Dell EMC Study Materials, Dell EMC Guides, Dell EMC Tutorial and Materials

Then again, the opposite is often true as well. Sometimes you have no idea what you’re missing until you have it – and then you can’t imagine how you ever lived without it!

There are countless examples of automated products, services, and technology that came along and changed our lives for the better, in ways we could never have imagined before.

Take banking, for example. It wasn’t all that long ago that we had to drive to the bank and interact with an actual human to deposit a check or get cash. And if it was Sunday? Too bad, closed. Then, miraculously these fancy gadgets called Automated Teller Machines came along and made life so much easier! Suddenly we could do basic transactions from all over the place at any time. And now with mobile deposit?! Whole new world. We can snap a pic on our phone and deposit in less than 30 seconds.

It’s easy to see how automation in banking saves us time and adds tremendous value, but the benefits of automation extend across almost all industries: medical (hello auto prescription refills!), shipping (what did we ever do without Amazon Prime?), car transmissions (how many people under 30 know how to drive a stick shift?). And of course, another key place where automation brings tremendous benefits: your data center.

Server Automation in the Data Center


Server automation is critical to helping things run faster, saving employee time and reducing human error. It’s also key to modernizing approximately 50% of today’s applications that cannot be moved to the public cloud.

Despite its tremendous potential benefits, many IT departments have yet to embrace server automation. Perhaps, like those pre-ATM days, they simply don’t realize how much easier and more efficient their work can be. Or maybe the idea of implementing new systems seems too daunting. Or perhaps they’re afraid automation threatens their jobs? We get it, change is hard. But if for whatever reason your organization has been putting off automating your data center, it may be time to re-think your position.

After all, server automation:

◈ Saves time (faster deployment)
◈ Reduces human error
◈ Decreases downtime
◈ Frees up employees to work on other tasks that add value to the organization
◈ Helps with AI implementation (71% of organizations say inefficiencies due to lack of server automation are a challenge to their AI strategies)


How Does Dell EMC Help with Server Automation?


Dell EMC PowerEdge servers were the first servers to offer “embedded management automation.” All PowerEdge platforms, including rack, tower, and modular servers, can be managed by the same management console, OpenManage Enterprise, which means manageability is simple and consistent across all PowerEdge servers.

All PowerEdge servers also include iDRAC (Integrated Dell Remote Access Controller). iDRAC is the “brains” behind many automation features, from deployment to updates to monitoring, maintenance, and remediation.

The Dell EMC policy-driven management systems, OpenManage Enterprise and iDRAC, can automate server management tasks and ultimately enable customers to free up IT resources and increase system uptime.

Additionally, users can manage both their virtual and physical IT environment by utilizing OpenManage integrations within third-party management consoles such as VMware vCenter.
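One common way scripts automate monitoring across a PowerEdge fleet is the DMTF Redfish REST API, which iDRAC exposes. A minimal sketch of the triage logic only: `sample` is a hand-written payload in the shape of a Redfish ComputerSystem resource's `Status` object, not a live response, and the remediation step is left as a placeholder.

```python
def needs_remediation(system: dict) -> bool:
    """Flag a server whose Redfish Status is anything but Enabled/OK."""
    status = system.get("Status", {})
    return status.get("Health") != "OK" or status.get("State") != "Enabled"

sample = {
    "Id": "System.Embedded.1",   # the system resource Id iDRAC typically uses
    "PowerState": "On",
    "Status": {"Health": "Critical", "State": "Enabled"},
}

if needs_remediation(sample):
    print("queue server for automated remediation")
```

In practice the payload would come from an authenticated GET against the iDRAC's `/redfish/v1/Systems` collection, and the remediation would be an OpenManage Enterprise job rather than a print statement.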

Check out how the IT Staff at CERN Get More Sleep with OpenManage Enterprise

Don’t Wait, Automate.


If you’re like 62% of other enterprises, you are dissatisfied with the quality, speed, and cost of your application releases. You may also be frustrated with the agility of your architecture to support both mission-critical workloads as well as data-intensive workloads. If that’s the case, you are certainly not alone. Fortunately, there are steps you can take to improve and overcome these common challenges.

No matter what stage you’re in with your automation, there are ways you can improve performance by automating specific tasks. Sure, you’ll need to commit to the process and make sure you have the right hardware and systems in place. But once you’re rocking along, you’ll realize the tremendous benefits of switching from manual to automated processes (and probably wonder why you didn’t automate sooner!). You’ll be able to react quickly and support data-intense workloads. Your employees will have time to focus on other, more important tasks that add value to your business.

For a more complete overview of server automation and how Dell EMC can help your organization reach its potential, download the EMA white paper, Automate IT Infrastructure for Speed, Security, and Efficiency.

Wednesday 9 January 2019

Delivering Quantifiable Customer Value with Dell EMC VMware-Based HCI

Deploying Dell EMC VMware-based HCI provides a cost-effective and high-performing infrastructure foundation on which to run important business applications across distributed business environments. According to a recent IDC study, partners can help customers realize significant quantifiable value from these investments.

Dell EMC Study Materials, Dell EMC Guides, Dell EMC Certification, Dell EMC Learning

Many of the IT processes put in place to support business operations are challenged to meet the demands of a rapidly changing digital world. Inefficiencies not only impede your customers’ business growth, they also create barriers to the very innovation needed to compete in an environment of shifting business models. Providing customers with highly automated, software-defined infrastructure helps eliminate data center silos and support IT agility. Hyperconverged infrastructure (HCI) enables your customers to transform and scale operations rapidly and efficiently by consolidating compute, network, and storage in a single software-defined solution.

How Can Partners Help Customers Realize Business Results with HCI?


The operational efficiency advantages of automated, software-defined infrastructure are well understood, but what’s the real business value of HCI to your customers? IDC surveyed organizations running various workloads on Dell EMC software-defined hyperconverged appliances to identify and quantify the business impact. IDC’s analysis demonstrates that investing in Dell EMC VMware-based HCI, including VxRail and VxRack SDDC, contributes to $4.89 million additional gross revenue per organization per year from better addressing business opportunities and reducing downtime. What does that mean for channel partners? The ability to deliver the real business results that strengthen customer relationships and establish more profitable long-term partnerships.

Top Benefits for Channel Partners


Faster time to revenue. Demonstrate a proven five-year ROI of 489% with Dell EMC VMware-based HCI to shorten the sales cycle and realize faster time to revenue.

Fast-growing, margin-rich opportunity. Growing business needs are driving rapid adoption of HCI solutions, with 85% of IT leaders indicating that their companies already use or plan to use HCI and 50% of current HCI users expected to expand their deployments.

Opportunity for value-added services. Deliver value-added services that ensure the uptime and protection of business-critical applications, such as backup, archiving, and recovery services.

Top Benefits for Customers


Eliminate data center silos and support more agile IT. Consolidate separate silos of compute, network, and storage down to a single software-defined solution.

Increase efficiency. Help IT infrastructure and application development teams operate efficiently and productively with more reliable and agile IT infrastructures.

Minimize unplanned downtime. Reduce the impact of infrastructure-related outages on business operations by up to 90%.

Realize significant business value. Deliver a total average annual value of $5.33 million per organization ($370,700 per 100 users) with an investment in Dell EMC VMware-based HCI.

Monday 7 January 2019

The Security-Automation Tango: Simple Approaches to Robust IT Infrastructure Security

It takes two to tango. Apparently, so does a secure IT infrastructure.

Dell EMC Security, Dell EMC Study Materials, Dell EMC Guides, Dell EMC Tutorial and Materials

A thriving enterprise needs a modern datacenter to successfully meet its business objectives. A key prerequisite for a modern datacenter is robust infrastructure security. And for that security to be effective, it needs to be intelligently automated.

The infrastructure security dilemma


At its core, every enterprise is a data business. And data is vulnerable to malicious actors. The average data breach costs organizations between $3M and $5M. The impact of these breaches is not just financial but also a loss of trust, both internally and externally.

Enterprises do not lack security tools. Multiple surveys have consistently shown that enterprises run an average of 75 security tools. However, these tools struggle to work with each other or across the datacenter. And the situation is only getting worse: there is a looming shortage of security professionals, with an estimated 3.5M unfilled positions by 2021.

Enterprises are at a dire crossroads. Critical IT infrastructure faces security risk. The current tools are inadequate. And there are not enough security professionals in the industry.

How are enterprises to conduct business in a safe, frictionless manner while protecting their businesses and customers?

Two to Tango


Successful enterprises have adopted two guiding principles to address this dilemma:

1. Integrate security deep into the infrastructure

To effectively integrate security into the infrastructure, one should start with the infrastructure components. One of the key building blocks is the server. The National Institute of Standards and Technology (NIST) recommends that system designers adopt the Cybersecurity Framework. This way, security is built into every subsystem, enabling systems to identify, protect against, detect, respond to, and recover from malicious activities when they occur.

2. Automate as much of this robust security as possible

Intelligent automation increases the efficiency and consistency of actions. Combining intelligent automation with the Cybersecurity Framework makes for a robust IT infrastructure.

Dell EMC has adopted these two guiding principles for all of its PowerEdge server designs. Based on the Cybersecurity Framework, Dell EMC has developed a Cyber Resilient Architecture to protect servers against cybersecurity attacks. Every PowerEdge server is made safer by this Cyber Resilient Architecture and supported by robust security and automation features. Let’s examine a few of these innovative features.

Securely protect from malicious activity


Every server undergoes routine BIOS and firmware updates. However, these routine maintenance activities present a vulnerability that malicious actors could take advantage of. To mitigate this, every PowerEdge server is designed with an immutable, silicon-based Root-of-Trust mechanism. This mechanism cryptographically verifies the authenticity of every firmware and BIOS update. If verification fails, the update is rejected and the user is notified.

A similar automatic verification is performed when the server boots. Key routine tasks are quietly but effectively verified. Several other automated security features, including Chassis Intrusion Alert, Signed Firmware Updates, and Supply Chain Assurance, are deliberately designed to protect the server infrastructure.
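The verify-before-apply flow can be illustrated with a conceptual sketch. This is not Dell EMC’s actual implementation (which anchors trust in silicon and uses cryptographic signatures); it simply shows the idea of rejecting any update whose digest does not match a trusted value:

```python
import hashlib
import hmac

def verify_firmware_image(image: bytes, trusted_digest_hex: str) -> bool:
    """Accept an update only if the image's SHA-256 digest matches the
    trusted digest. (In real hardware, the trust anchor lives in silicon
    and full signature verification is used, not a bare digest check.)"""
    actual = hashlib.sha256(image).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(actual, trusted_digest_hex)

# Simulated flow: a known-good image passes, a tampered one is rejected.
good_image = b"firmware v2.1 payload"
trusted_digest = hashlib.sha256(good_image).hexdigest()

assert verify_firmware_image(good_image, trusted_digest)       # update proceeds
assert not verify_firmware_image(b"tampered!", trusted_digest)  # update rejected
```

The essential design point is the same as in the Root-of-Trust mechanism above: the reference value used for verification must itself be immutable, or the check can be subverted.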

Diligently detect malicious activity


It is critical to determine if and when your servers are compromised. This requires visibility into the configuration and health status of the server subsystems. Any changes to the BIOS, firmware, and Option ROMs within the boot process should be detected immediately. To help automate this, PowerEdge servers employ the integrated Dell Remote Access Controller (iDRAC).

The iDRAC is dedicated systems-management hardware that comprehensively monitors the server and takes remedial action depending on the event. For example, one of the more interesting automated security checks the iDRAC provides is Drift Detection. System administrators can define a server configuration baseline based on their security and performance needs. The iDRAC can detect deviations from that baseline and helps repair the drift with simple workflows to stage the changes.

With multiple alerts and logs from the iDRAC, system administrators can proactively act to keep their server infrastructure secure.

Rapidly recover from malicious activity


In the event of a security breach, it is critical for enterprises to limit the damage and rapidly get back to normalcy. PowerEdge servers include several features that support swift restoration to a known good state. The BIOS and OS recovery feature uses a special, protected area that stores pristine images, helping servers rapidly recover from corrupted OS or BIOS images. Additionally, the iDRAC stores a backup BIOS image that enables automated and on-demand Cyber Resilient BIOS recovery. System administrators can easily restore a server to its original state immediately following an adverse event.
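The recovery pattern can be sketched abstractly: fetch the pristine image from a protected area, verify its integrity, and only then restore it. Everything below is hypothetical (the store, the component names, the return-based "restore"); real recovery is performed by the iDRAC against protected flash, not by application code:

```python
import hashlib

# Hypothetical protected store of pristine images plus their digests,
# standing in for the reserved recovery area described above.
PROTECTED_STORE = {
    "bios": {
        "image": b"pristine BIOS image",
        "sha256": hashlib.sha256(b"pristine BIOS image").hexdigest(),
    },
}

def recover(component: str) -> bytes:
    """Return the pristine image, verifying integrity before any restore."""
    entry = PROTECTED_STORE[component]
    if hashlib.sha256(entry["image"]).hexdigest() != entry["sha256"]:
        raise RuntimeError(f"{component}: recovery image failed integrity check")
    return entry["image"]  # in real hardware, the iDRAC flashes this back

restored = recover("bios")
assert restored == b"pristine BIOS image"
```

The verify-then-restore ordering matters: restoring an unverified "backup" would simply reintroduce whatever the attacker planted.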

If a server needs to be retired or replaced, the PowerEdge System Erase feature wipes sensitive data and settings safely, securely, and in an environmentally friendly manner.

A brief overview of PowerEdge Security and Automation

As the above examples highlight, robust security needs to be intelligently automated. And intelligent automation needs to have integrated security.

It takes two to tango.

PowerEdge servers come with a wide variety of such robust security and automation features, including hardware interfaces (such as TPM and SED drives) that the OS can then use to build OS-level security infrastructure. IT leaders have been referring to this popular guide to server security to calibrate their systems against best practices for keeping critical infrastructure safe and secure. Does your critical infrastructure meet these considerations?