Tuesday, 27 February 2018

New Technologies Are Not a Threat, but the CIO’s Biggest Opportunity


As I travel around the region, it is incredibly valuable to meet with CEOs and CIOs where they live and do business, and hear directly from them what keeps them up at night. Whether in London or Bucharest, all share concerns about ‘the future’ and the role emerging technologies will play in transforming their business for the better – without throwing out what is working today. Another common discussion point for both groups is the changing role of the CIO: no longer just the person who keeps the technology running, the CIO is now a key player in deciphering emerging technologies and identifying which innovation projects will propel the business forward – so they can disrupt before being disrupted.

The impact of emerging technologies on the way we run our businesses, and the evolving relationship between humans and machines, is something Dell Technologies has been exploring over the past year. Our latest ‘Realizing 2030’ global research project with Vanson Bourne surveyed 3,800 business leaders on the next era of human-machine partnerships and how they intend to prepare. The results were nearly unanimous, with leaders agreeing we’re on the cusp of immense change: 82% of those surveyed expect humans and machines to work as integrated teams within their organization inside of five years. However, they’re divided over what this shift will mean for them, their business and even the world at large. To share just a few of these divided opinions:

◈ 50% of business leaders think automated systems will free up their time – meaning the other half don’t share this belief
◈ 42% believe they’ll have more job satisfaction in the future by offloading the tasks they don’t want to do to machines
◈ 58% don’t share this prediction. If they don’t change their opinion, they will keep doing tasks that could easily be automated and will continue to lack time for higher-order pursuits that focus on creativity, education and strategy

Charting a course for the future given the rapidly changing environment is hard enough as it is. If business leaders have to deal with the polarizing viewpoints described above, then confidently making the right decisions to transform their business is going to be even more challenging. Fortunately, this is where the CIO can really come into his or her own. The ‘Realizing 2030’ research also revealed that business leaders do agree on the need to change and that emerging technologies like AI, AR and VR can be leveraged to speed up digital transformation.

So how can the CIO take these insights and demonstrate their strategic role in mapping out the direction the organisation needs to take?

◈ Lead with the technology. No one in the organisation knows as much about technology as the CIO. It is through innovative use of technology, namely software, that start-ups are disrupting established companies. A technologist to the bone, the CIO not only knows which technologies can be used to attack the company’s position, but can also play a leadership role in identifying how the company can use technology to pre-empt disruption or move the goalposts to their advantage. However, as I discussed in my previous blog on the seven habits of the effective hybrid CIO, the future-forward CIO needs more than technology know-how: a deep understanding of the company’s strategic business and financial goals is required to turn that technology insight into a roadmap to the future that the board will buy into.

◈ Follow the data. If there is one thing leadership teams understand, it’s numbers – and the CIO is the master of all data. Data is key to understanding customer behavior, analyzing operational efficiency and improving customer service. The CIO can use this data to look backward and forward, combining advanced analytics of historical data with real-time data collection to tell a company where to go next. This also positions the CIO as the best choice to set the metrics and KPIs that will direct digital business transformation.

◈ Be human. There is a tendency when talking technology to be totally binary or metrics-focused, but a key success factor in any organisation’s transformation is its people. So the CIO needs to balance the speed of change, without going too fast and losing valuable resources along the way. The CIO needs to set the tone and clearly explain why change is necessary and what it will mean for the organisation – in fact, our research found that business leaders’ number one tip for accelerating digital transformation was to secure employee buy-in on a company’s digital transformation vision and values. Together with the CEO, the CIO will convince people of the vision for the future, showing the immense possibilities on the horizon.

This is a great moment for CIOs to shine, both in translating emerging technologies into reality and in showing the strategic value they can create. The role of the CIO is multifaceted and needs to look at every challenge and opportunity through different lenses. Every new technology demands a thorough investigation, from both a technological point of view and a business one. Does this emerging technology have staying power, or is it just a passing fad? Can it be easily integrated into the overall architecture of the organisation? Will it drive forward our IT, security and workforce transformation? Can it help differentiate our service offering and catapult the company to become a contender in the next era? These are the questions that need to be answered to make the right technology decisions for any organisation navigating this new era of emerging technologies, and the CIO is uniquely positioned to separate hype from reality along the way to hyper-growth.

Saturday, 24 February 2018

Software Defined Storage Availability (Part 2): The Math Behind Availability


In this blog we will discuss the facts of availability using math, and demystify the myths behind ScaleIO’s high availability.

For data loss or data unavailability to occur in a system with two replicas of data (such as ScaleIO), there must be two concurrent failures, or a second failure must occur before the system recovers from the first. Therefore, one of the following four scenarios must occur:

1. Two drive failures in a storage pool, OR
2. Two node failures in a storage pool, OR
3. A node failure followed by a drive failure, OR
4. A drive failure followed by a node failure

Let us choose two popular ScaleIO configurations and derive the availability of each.

20 x ScaleIO servers deployed on Dell EMC PowerEdge R740xd servers with 24 SSD drives each (1.92TB per SSD), using a 4 x 10GbE network. In this configuration we will assume that the rebuild time is network bound.
20 x ScaleIO servers deployed on Dell EMC PowerEdge R640 servers with 10 SSD drives each (1.92TB per SSD), using a 2 x 25GbE network. In this configuration we will assume that the rebuild time is SSD bound.

Note: ScaleIO best practices recommend a maximum of 300 drives in a storage pool; therefore, for the first configuration we will configure two storage pools with 240 drives in each pool.

To calculate the availability of a ScaleIO system we will leverage a couple of well-known academic publications:


We will adjust the formulas in the paper to the ScaleIO architecture and model the different failures.

Two Drive Failures


We will use the following formula to calculate the MTBF of a ScaleIO system for the two-drive failure scenario:

MTBFsystem [years] = MTBFdrive² / (N × (G − M) × MTTRdrive × K)

Where:

◈ N = Number of drives in a system
◈ G = Number of drives in a storage pool
◈ M = Number of drives per server
◈ K = 8,760 hours (1 year)
◈ MTBFdrive = MTBF of a single drive
◈ MTTRdrive = Mean Time to Repair – the repair/rebuild time of a failed drive

Note: This formula assumes that two drives that fail in the same ScaleIO SDS (server) will not cause DU/DL as the ScaleIO architecture guarantees that replicas of the same data will NEVER reside on the same physical node.
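To make the model concrete, here is a minimal Python sketch of the two-drive-failure calculation as reconstructed above, using the ~1.5-minute MTTR derived later in this post. The 1,000,000-hour drive MTBF is an assumed illustrative figure, not a published drive or ScaleIO specification.

```python
HOURS_PER_YEAR = 8760  # K in the formula above

def mtbf_two_drive_failures(mtbf_drive_h, n_drives, pool_drives,
                            drives_per_server, mttr_h):
    """MTBF in years for the two-concurrent-drive-failure scenario.

    Drives on the failed drive's own server are excluded (G - M),
    since ScaleIO never keeps both replicas on one physical node.
    """
    candidates = pool_drives - drives_per_server       # G - M
    failures_per_hour = n_drives * candidates * mttr_h / mtbf_drive_h ** 2
    return 1 / (failures_per_hour * HOURS_PER_YEAR)

# 480 drives total (two pools of 240), 24 drives per server, MTTR ~1.5 min
print(mtbf_two_drive_failures(1_000_000, 480, 240, 24, 1.5 / 60))
# ~44,000 years -- in the neighborhood of the table later in this post
```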

Let’s consider two scenarios: in the first, the rebuild process is constrained by network bandwidth; in the second, it is constrained by drive bandwidth.

Network Bound


In this case we assume that the rebuild time/performance is limited by the available network bandwidth. This will be the case if you deploy a dense configuration such as the Dell EMC PowerEdge R740xd with a large number of SSDs in a single server. In this case, the MTTR function is:

MTTRdrive = (Drive_Size / (S × Network_Speed)) × Conservative_Factor

Where:

◈ S – Number of servers in a ScaleIO cluster
◈ Drive_Size – Capacity of the failed drive (1.92TB in these configurations)
◈ Network_Speed – Bandwidth in GB/s available per server for rebuild traffic (excluding application traffic)
◈ Conservative_Factor – A multiplier that adds time to the rebuild estimate (to be conservative)

Plugging the relevant values into the formula above, we get an MTTR of ~1.5 minutes for the 20 x R740xd, 24 SSDs @ 1.92TB w/ 4 x 10GbE configuration (two storage pools w/ 240 drives per pool). The 20 x R640, 10 SSDs @ 1.92TB w/ 2 x 25GbE configuration yields an MTTR of ~2 minutes. These MTTR values reflect the superiority of ScaleIO’s declustered RAID architecture, which results in very fast rebuild times. In a later post we will show how these MTTR values are critical and how they impact system availability and operational efficiency.
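As a back-of-the-envelope check, here is a small Python sketch of the network-bound estimate (a sketch only – the conservative factor of 4 is an assumed safety margin, not a ScaleIO-published value):

```python
def mttr_network_bound_minutes(drive_tb, servers, net_gbps_per_server,
                               conservative_factor):
    """Rebuild time for one failed drive when the rebuild is limited
    only by network bandwidth and is spread across all servers."""
    drive_gb = drive_tb * 1000                          # TB -> GB
    aggregate_gb_s = servers * net_gbps_per_server / 8  # Gb/s -> GB/s
    return drive_gb / aggregate_gb_s * conservative_factor / 60

# 20 x R740xd, 4 x 10GbE (40Gb/s) per server
print(mttr_network_bound_minutes(1.92, 20, 40, 4))  # ~1.3 minutes
```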

SSD Drive Bound


In this case, the rebuild time/performance is bound by the drives themselves, and the rebuild time is a function of the number of drives available in the system. This will be the case with less dense configurations such as the 1U Dell EMC PowerEdge R640. In this case, the MTTR function is:

MTTRdrive = (Drive_Size / (G × Drive_Speed)) × Conservative_Factor

Where:

◈ G – Number of drives in a storage pool
◈ Drive_Size – Capacity of the failed drive
◈ Drive_Speed – Per-drive bandwidth available for rebuild
◈ Conservative_Factor – A multiplier that adds time to the rebuild estimate (to be conservative)

System availability is calculated by dividing the time the system is available and running by that same time plus the restore time. For availability we will use the following formula:

Availability = MTBF / (MTBF + RTO)

Where:

◈ RTO – Recovery Time Objective, the amount of time it takes to recover the system after a data loss event (for example, two drives failing in a single pool) where data needs to be recovered from a backup system. We will be highly conservative and treat Data Unavailability (DU) scenarios as being as severe as Data Loss (DL) scenarios; therefore we use RTO in the availability formula.

Note: the only purpose of RTO is to translate MTBF to availability.
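As a quick illustration of that translation, here is a minimal Python sketch; the 24-hour RTO is an assumed example value, not a figure from the model above:

```python
HOURS_PER_YEAR = 8760

def availability(mtbf_years, rto_hours):
    """Availability = uptime / (uptime + restore time)."""
    mtbf_hours = mtbf_years * HOURS_PER_YEAR
    return mtbf_hours / (mtbf_hours + rto_hours)

# Assumed example: a 43,986-year MTBF with a 24-hour RTO
print(availability(43_986, 24))  # ~0.99999994
```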

Node and Device Failure


Next, let’s discuss the system’s MTBF when a node failure is followed by a drive failure. For this scenario we will use the following model:

MTBFsystem [years] = (MTBFserver × MTBFdrive) / (S × (G − M) × MTTRserver × K)

Where:

◈ M = Number of drives per node
◈ G = Number of drives in the pool
◈ S = Number of servers in the system
◈ K = Number of hours in 1 year i.e. 8,760 hours
◈ MTBFdrive = MTBF of a single drive
◈ MTBFserver = MTBF of a single node
◈ MTTRserver = repair/rebuild time of a failed server

In a similar way, one can develop the formulas for the other failure sequences, such as a node failure following a drive failure, or a second node failure after a first node failure.

Network Bound Rebuild Process


In this case we assume that the rebuild time/performance is constrained by network bandwidth. We will make assumptions similar to those for the drive failure case. Here, the MTTR function is:

MTTRserver = ((M × Drive_Size) / (S × Network_Speed)) × Conservative_Factor

Where:

◈ M – Number of drives per server
◈ Drive_Size – Capacity of each drive on the failed server
◈ S – Number of servers in a ScaleIO cluster
◈ Network_Speed – Bandwidth in GB/s available per server for rebuild traffic (excluding application traffic)
◈ Conservative_Factor – A multiplier that adds time to the rebuild estimate (to be conservative)

Plugging the relevant values into the formula above, we get an MTTR of ~30 minutes for the 20 x R740xd, 24 SSDs @ 1.92TB w/ 4 x 10GbE configuration (two storage pools w/ 240 drives per pool). The 20 x R640, 10 SSDs @ 1.92TB w/ 2 x 25GbE configuration yields an MTTR of ~20 minutes. During system recovery ScaleIO rebuilt about 48TB of data for the first configuration and about 21TB for the second.
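The same back-of-the-envelope style works for a full node, since the data to rebuild is roughly M drives’ worth. A sketch, with the conservative factor again an assumed value:

```python
def node_rebuild_estimate(drives_per_node, drive_tb, servers,
                          net_gbps_per_server, conservative_factor):
    """Return (TB to rebuild, MTTR in minutes) for a failed node,
    assuming the rebuild is network bound."""
    data_tb = drives_per_node * drive_tb
    aggregate_gb_s = servers * net_gbps_per_server / 8  # Gb/s -> GB/s
    minutes = data_tb * 1000 / aggregate_gb_s * conservative_factor / 60
    return data_tb, minutes

# 20 x R740xd with 24 x 1.92TB drives and 4 x 10GbE per server
print(node_rebuild_estimate(24, 1.92, 20, 40, 4))
# (~46TB, ~31 minutes) -- consistent with the ~48TB / ~30 minutes above
```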

SSD Drive Bound


In this case we assume that the rebuild time/performance is SSD drive bound, and the rebuild time is a function of the number of drives available in the system. Using the same assumptions as for drive failures, the MTTR function is:

MTTRserver = ((M × Drive_Size) / (G × Drive_Speed)) × Conservative_Factor

Where:

◈ G – Number of drives in a storage pool
◈ M – Number of drives per server
◈ Drive_Size – Capacity of each drive on the failed server
◈ Drive_Speed – Per-drive bandwidth available for rebuild
◈ Conservative_Factor – A multiplier that adds time to the rebuild estimate (to be conservative)

Based on the formulas above, let’s calculate the availability of a ScaleIO system for the two configurations:

20 x R740xd, 24 SSDs @ 1.92TB w/ 4 x 10GbE Network

(Deploying 2 storage pools w/ 240 drives per pool)

Scenario            Reliability (MTBF)   Availability
Drive After Drive   43,986 [Years]       0.999999955
Drive After Node    6,404 [Years]        0.999999691
Node After Drive    138,325 [Years]      0.999999985
Node After Node     38,424 [Years]       0.999999897
Overall System      4,714 [Years]        0.99999952 or 6-9’s

20 x R640, 10 SSDs @ 1.92TB w/ 2 x 25GbE:

Scenario            Reliability (MTBF)   Availability
Drive After Drive   105,655 [Years]      0.999999983
Drive After Node    27,665 [Years]       0.999999637
Node After Drive    276,650 [Years]      0.999999993
Node After Node     69,163 [Years]       0.999999975
Overall System      15,702 [Years]       0.99999989 or 6-9’s
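The “Overall System” rows follow directly from the per-scenario numbers: assuming the scenarios are independent, their failure rates add, so the MTBFs combine as a harmonic sum. A quick sketch:

```python
def overall_mtbf_years(scenario_mtbfs_years):
    """Failure rates of independent scenarios add, so the combined
    MTBF is the harmonic sum of the per-scenario MTBFs."""
    return 1 / sum(1 / m for m in scenario_mtbfs_years)

# Per-scenario MTBFs from the first configuration's table
print(overall_mtbf_years([43_986, 6_404, 138_325, 38_424]))  # ~4,700 years
# The second table combines the same way to ~15,700 years
```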

Since these calculations are complex, ScaleIO provides its customers with FREE online tools to build HW configurations and obtain availability numbers that include all possible failure scenarios. We advise customers to use these tools, rather than crunching complex mathematics, to build system configurations based on desired system availability targets.

Thursday, 22 February 2018

Dell EMC and VMware – Better Together for Service Providers


With Mobile World Congress in Barcelona coming soon, there is a lot of anticipation about what vendors will showcase at the event and what will solve the challenges facing service providers today. Some of their main challenges include:

Slowing growth & innovation due to increasing technological complexity
Rising CAPEX and OPEX for legacy network infrastructure
Price/margin erosion due to disruption in existing business models
Uncertainty in new business models – new value chains, new competitors
Operational transformation requires workforce and process retooling


Dell EMC and VMware are teaming together to help solve these challenges and are excited to demonstrate the value of Dell Technologies end-to-end for service providers.

As a starting point, at Mobile World Congress Americas last fall, we announced the Dell EMC NFV Ready Bundle for VMware to simplify and accelerate NFV deployments for service providers.  This bundle includes open standards-based Dell EMC Cloud Infrastructure (compute, networking, Service Assurance Suite and management tools) and a choice of a Virtual Infrastructure Manager (vCloud Director or VMware Integrated OpenStack) with vSAN or Dell EMC ScaleIO.

The Dell EMC NFV Ready Bundle for VMware was just the beginning of joint, pre-validated solutions between both companies.  Dell EMC and VMware are on a journey to integrate and demonstrate the value of our joint solution in other areas as well.

SD-WAN is another area where both companies are working together to expand SD-WAN opportunities for service providers.  According to IDC’s Worldwide SD-WAN Forecast, 2017-2021, SD-WAN sales will grow at a 69% compound annual rate and will hit $8.05 billion by 2021.

Enterprises are increasingly searching for cost-effective and simpler alternatives to WAN connectivity for their sprawling branch networks. SD-WAN addresses many enterprise needs around WAN costs, simplified operations and improved application performance. For service providers, offering SD-WAN as a Service is a new revenue opportunity because they can manage WAN services for enterprises. This is appealing to enterprises that don’t want to manage the WAN network, or applications, and would prefer to outsource these services to a service provider.

To help with SD-WAN adoption, Dell EMC and VeloCloud, which is now part of VMware, offer the Dell EMC SD-WAN Ready Nodes for VeloCloud to accelerate SD-WAN revenue for service providers. VeloCloud can also go one step further by hosting and operating SD-WAN service on behalf of the service provider to accelerate adoption.  We will also be demonstrating leading edge joint SD-WAN solutions for service providers at Mobile World Congress.

We understand that the industry is constantly moving and evolving, and Dell EMC and VMware will continue to integrate capabilities that service providers want. Our shared goal is to significantly reduce deployment complexity by offering more joint pre-architected and pre-validated solutions that integrate industry-leading Dell EMC hardware and VMware software, reduce installation complexity, and provide confidence that the joint solutions are ready to work in production for service providers.

We look forward to seeing you at Mobile World Congress on Feb. 26 to March 1!  Please visit us in the Dell Technologies/VMware booth 3K10 in Hall 3 to learn more about our solutions for service providers.


Saturday, 17 February 2018

Creatives & Engineers – Understanding & Empowering Your ‘Workstation’ Customers


As workplaces have evolved, so have the workforces that use them. Several distinct worker personas have emerged, each with its own demands for specific hardware, software and services. We think it’s time your customers knew more about them.

Thinking about how people work forces you to categorize them almost immediately. What’s their role? What components do they need to fulfill that role? By understanding Dell EMC’s personas, your sales team can quickly identify these different categories, helping them pick the technology that’s right for customers’ users.

Thinking even deeper, you can split personas into different groups, too. Creatives and engineers are two such personas, and are the most likely to use our workstation products.

Engineers


Driving industry transformation, this persona uses computer-aided design (CAD) and computer-aided manufacturing (CAM) software to create products. Engineers design the products that are integral to your customers’ development, and Dell EMC has a solution for each stage of their workflow.

Take the Dell Precision 7000 Series, for example, with Windows 10 Pro for Workstations. It has a dual-socket motherboard to allow for massive processing power, and it can support up to four NVIDIA Quadro or AMD Radeon graphics cards. Combine this workstation with the Dell UltraSharp monitor, and engineers get a fully immersive working experience. The same series in 2U rackmount provides a centralized workstation environment, and with that, customers can expect to get remote configuration, operating system deployment and health monitoring.

Swiss engineering and research company GKP Fassadentechnik is solving complex environmental issues with its engineering models and sees Dell EMC as a strategic partner in increasing its productivity. Read the case study in our engineer persona guide to learn more.


Creatives


Whether your customer is developing the latest blockbuster movie, creating an immersive virtual reality experience for a product launch, or editing 8K video, an ISV-certified Dell Precision workstation with Microsoft Windows 10 Pro is the tool they can rely on. To bring their creations to life, however, these devices need to connect to render farms, and Dell EMC offers PowerEdge servers in an array of configurations, as well as switches to provide fast connectivity, and Isilon storage for sharing across multiple national or global sites.

In our guide to creative workers, we introduce Animal Logic, an Australian animation and visual effects company behind Happy Feet, The Matrix and The Great Gatsby. It has partnered with us for the last 10 years across its bases in London, Vancouver, Sydney and California. You can find out why by downloading the guide.

Our Approach


Technology has a huge potential to help organizations transform their workplaces, and by extension, transform their people’s working lives. We believe that approaching workers as personas is a critical part of workplace transformation, providing personalized products for how employees work today and in the future.

We’ll take care of the solutions, so you can take care of your customers.


We’ve also created related emails on our new Digital Marketing Platform so that your marketing teams can quickly get these guides into the hands of your customers. The guides explain how to maximise the productivity of their employees through the right choices from our end-to-end portfolio.

Friday, 16 February 2018

Blockchain + Analytics: Enabling Smart IOT


Autonomous cars are racing down the highway at speeds exceeding 100 MPH when suddenly a car a half-mile ahead blows out a tire, sending dangerous debris across three lanes of traffic. Instead of relying on sending this urgent, time-critical distress information to the world via the cloud, the cars on that particular section of the highway use peer-to-peer, immutable communications to inform all vehicles in the area of the danger so that they can slow down and move to unobstructed lanes (while also sending a message to the nearest highway maintenance robots to remove the debris).

Figure 1: “I, Robot” Movie Scene of Autonomous Robots Cleaning Up Road Debris

Real-time analytics at the edges of the Internet of Things, and real-time communications between devices of different types, models and makes, are going to be critical to realizing the operational and societal benefits of smart cities, smart airports, smart hospitals, smart factories and the like.

While machine learning and reinforcement learning at the edges and deep learning at the core are key to driving system-wide intelligence, the ability to capture, communicate, share and build upon the system-wide “learnings and insights” at the “local level” is going to require a technology that exploits real-time, nearest neighbor, peer-to-peer, immutable communications.  Hello, Blockchain!


While there are many interesting aspects of Blockchain, the two that I find the most compelling for creating “smart” environments are:

◈ Peer-to-peer, the ability to share access to data and information (analytics) without the need for a central server
◈ Immutable communications, meaning the data and information are not susceptible to change.

Define Blockchain


A blockchain is a distributed file system in which blocks of information are linked together (“chained”) and secured using private key cryptography, ensuring only those with appropriate permission can edit the data. Because copies of the file are stored on multiple computer systems (distributed) and kept synchronized through the consensus of the network, they enable innovative solutions to problems involving tracking and ledgering transactions in a digital world.
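To make the “chained blocks” idea concrete, here is a deliberately tiny Python sketch of a hash-linked chain. It is a toy illustration only: it omits the distributed consensus, peer-to-peer replication and private key cryptography described above.

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Create a block whose hash covers its contents plus the previous
    block's hash -- altering any earlier block invalidates the chain."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

genesis = make_block("debris alert: lane 2 blocked", prev_hash="0" * 64)
second = make_block("debris cleared by maintenance robot",
                    prev_hash=genesis["hash"])

# Verify the link: recompute the first block's hash and compare
body = {k: v for k, v in genesis.items() if k != "hash"}
assert second["prev_hash"] == hashlib.sha256(
    json.dumps(body, sort_keys=True).encode()
).hexdigest()
```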


The bottom line: blockchain enables low-cost, peer-to-peer communications without the added expense (and latency) of a cloud infrastructure.

Precision Agriculture Use Case


A recent BusinessWeek article titled “This Army of AI Robots Will Feed the World” discusses the implications of micro-robots for plant-by-plant farming. From the article:

“The implication of plant-by-plant—rather than field-by-field—farming is not just the prospect of vast reductions in chemical usage. It could also, in theory, end mono-cropping, which has become the new normal—cornfields and soybean fields as far as the eye can see—and has given rise to the kind of high-calorie, low-nutrient diets that are causing heart disease, obesity, and Type 2 diabetes. Mono-crops also leach soil nutrients and put food supplies at risk, because single-crop fields are more susceptible to blight and catastrophe. Modern farmers have been segregating crops in part because our equipment can’t handle more complexity. Robots that can tend plants individually could support intercropping—planting corn in with complementary crops such as soybeans and other legumes.”


As these micro-robots roll down the fields applying deep learning algorithms to make snap decisions about which pesticides and/or herbicides to use, and how much to apply on an individual plant-by-plant basis, blockchain will play a critical role in communications between the micro-robots and the other farm implements of different types, models and manufacturers. It will carry the learnings and insights being gathered in real time about crop health, damage and disease, including:

◈ Is there a pattern to the pesticide and herbicide needs of the plants?
◈ Do certain types of plants require more than normal pesticides and herbicides?
◈ Are there any potential pesticide or herbicide “drift” problems that are having unexpected impact on neighboring plants?
◈ Are there certain portions of the field where more or less crop damage is prevalent?
◈ Which pesticides and herbicides seem to be delivering the best results?

These kinds of real-time insights can help the micro-robots to optimize the plant-by-plant treatments to drive optimal crop yield at minimal costs.


Role of Analytic Profiles or Digital Twins


What I find most interesting from a Big Data and Data Science perspective is the ability to build analytic profiles at the level of each individual plant. The micro-robots applying pesticides and herbicides can capture detailed performance and behavioral insights for every plant. These plant, pesticide and herbicide insights can be aggregated to provide substantial financial, operational and ecological benefits such as:

◈ Reduced use and overuse of pesticides and herbicides
◈ Improved predictability in crop yield
◈ Improved predictability in ideal harvesting time
◈ Reduced crops lost to disease and distress
◈ An optimized mix of crops on a field-by-field basis
◈ Reduced time and effort required to care for a field
◈ Save a whole bunch of money!

Blockchain + Analytics has the potential to convert almost any environment (airports, train stations, malls, highways, schools, hospitals, factories, stores) into a “smart” environment where the learnings and insights from normal operations can be quickly and securely shared with “neighboring” devices to create a self-learning, self-correcting and self-adjusting environment.

Wednesday, 14 February 2018

Five Reasons Mobile Devices Will Generate the Need for More Servers


In 1987, when I read a book titled The Media Lab that described some forward-looking work at MIT, I came across the question: how will we directly connect our nervous systems to the global computer?  I remember wondering at the time what global computer they were talking about.  A few short years later a couple of things happened.  The internet became a household concept, and phones resembling the Star Trek communicators I saw on television as a child became a reality.  The concept of radio communication and its potential linkage to the phone system had been conceived much earlier in the twentieth century, but by the 1990s it became commercially viable for everyday consumers.  The buildout of all of this infrastructure exploded, and now the internet is accessible by user endpoints all over the globe.  The internet has become ‘the global computer’ and the wireless infrastructure has become part of the answer to how we connect to it—from anywhere.  Extending this trend into the future, we can only expect that this growth will continue for decades and beyond.  Although I made the transition from radio engineer to computer engineer long ago, I retain my optimism and interest in the wireless industry.  With the inevitable progression of technology to enable richer experiences and services over the air, there are some things to anticipate for server usage.  Here are five reasons why I am bullish on the impact of the wireless industry on the computer industry.

Backend Datacenters


An increasing number of mobile devices are using their wireless access to connect to something.  Whether it’s streaming video, daily news, online music, or ride sharing services—the volume of traffic is growing.  To support this traffic, the data and intelligence must be hosted somewhere on computers.  Servers are pervasive in datacenters of all sizes and locations, supporting workloads varying from content delivery to data analytics.  Some of these services are hosted in the public cloud, but many are also hosted in private datacenters whose tenets include security and greater control over computing performance.  A mix of deployments is likely to continue for the foreseeable future.

NFV


Network Functions Virtualization, proposed in an influential 2012 whitepaper, has led to a migration of functionality from custom equipment to standard servers and switches.  Equipment that may have been realized with ASIC-based hardware in the past can now be implemented in software on off-the-shelf servers, controlling costs and easing lifecycle maintenance.  The software packages that implement this functionality are called virtual network functions (VNFs).  According to the original vision, these VNFs can be migrated and scaled to accommodate changes in network usage, just like cloud-native applications at SaaS hosters and “webtech” companies.  This, however, does not preclude software from being delivered in containers or run as processes on bare-metal servers as performance requirements dictate, which again leads back to more usage for servers.  The core network subcomponents in EPC and IMS that support the mobile networks are key targets for virtual network functions.  As global wireless infrastructure grows to support demand, supporting core networks increase in number and house more servers.

Cloud Radio Access Networks


The concept of Cloud Radio Access Networks brings us deeper into the metaphorical forest—exactly how will carriers be able to accommodate an increasing number of users with an increasing demand for more and more bandwidth, while still delivering a decent quality of service and controlling their costs?  The industry has to solve this problem within the bounds of the following constraints: the radio wave spectrum is finite, and deploying towers and associated equipment is expensive.  Historically, equipment at or near the cellular antenna sites performed the packet and signal processing necessary to receive and deliver end user data across the backhaul networks.

With CRAN, this functionality is split between a BBU (baseband unit) and RRH (remote radio head).  A Centralized Unit (CU) will host the BBU at a network edge site within the periphery of multiple antenna locations and their remote radio heads.  This would be less expensive than putting a BBU at every location, but it comes with some additional interesting benefits.  CoMP (coordinated multipoint) reception and transmission can be achieved, bringing better utilization of the network by providing mobile devices with connections to several base stations at once.  Data can be passed through the least loaded stations with some real-time decisions from the centralized unit.  Similarly, devices can receive from more than one tower while in fringe areas, and centralized traffic decisions can lead to fewer handover failures.  There is also a potential cost saving in alleviating the need for inter-base-station networks.  This implementation also allows for reconfiguring network coverage based on times of peak need, like sporting events.

Why is this relevant?  Because that CU is a server running software that implements the BBU.  There is a lot of possible variability in this implementation, but the same concept that applies to NFV is pertinent for radio access networks as well.  Off-the-shelf equipment has an economy of scale that will be favored over the development of custom specific-purpose appliances.  As 5G cellular is deployed and new coverage areas are created, CRAN will become widespread.

Mobile Edge Computing


Mobile Edge Computing is another inevitable phenomenon that will be increasingly developed as wireless usage grows.  There is already a concept today within the world of datacenters in which “points of presence” are deployed to allow acceptable response times for internet users.  For example, OTT (“over-the-top”) video services such as streaming movies are cached at locations near to the users.  MEC entails a buildout of sites (lots of them) close to the consumers of data.  Autonomous vehicles would not tolerate the latency of downloading maps from a location six thousand miles away—instead, a local computing site would support this type of workload.  Many evolving workloads including virtual reality and mobile gaming will benefit from mobile edge computing.

The question (and difficulty) for service providers will be where to deploy these computing locations.  This will occur over time and will result in a hierarchical network with more layers than the networks of today, which is part of the reason an analogy has been made to the distributed nervous system of an octopus.  The concept of “far edge” locations will be introduced, which house servers to host workloads and data in support of emerging wireless uses.  Some of these locations contain existing buildings with environmental constraints, but that does not necessarily preclude installation of ruggedized servers.  Perhaps more appealing to service providers, instead of investing in new brick-and-mortar sites, modular datacenters of all shapes and sizes can be dropped into place as “green-field” solutions.  These prebuilt datacenters, as small as one rack, can be installed capable of providing fresh-air cooling to standard servers even in warm environments.

Reconfigurable Computing


With all of these emerging workloads running on servers to support mobile wireless users, performance and packet latency can become an issue.  There have been early adopters of this type of computing model in the high-frequency trading industry, where companies have deployed FPGAs (field programmable gate arrays) as PCIe cards inside their servers.  These devices, when programmed for specific tasks, can offer faster operations compared to CPUs running generic instruction sets.  Incidentally, FPGAs have been widely utilized in the telecommunications industry for a long time.  Returning to the CRAN example, FPGAs can be used to implement FEC (forward error correction) algorithms required by the standard 4G and 5G cellular protocols, offloading the CPUs and accelerating the packet processing for these workloads.  These devices will extend the capabilities of standard servers, further extending their value in this industry.

So the “server industry” is not just one industry; it is a cross-section of tools and equipment applicable to many sectors including wireless communications.

Sunday, 11 February 2018

Get Modern with Enterprise-level Data Protection for Small, Mid-Size and Enterprise Remote and Branch Office Environments


Today, Dell EMC is expanding its modern protection storage portfolio with Dell EMC Data Domain DD3300, a new platform specifically designed to deliver enterprise-level data protection to small to mid-size organizations and remote/branch office environments of larger enterprises.

Why Data Domain DD3300 and Why Now?


Organizations today face many challenges and conflicting priorities, including data growth, supporting an ever-growing number of applications, an increasingly stringent regulatory and compliance environment, and continuously shrinking budgets. For organizations of all sizes, small to large, it is more paramount for business success than ever before that their data is protected and that they can easily leverage the cloud for flexibility, agility and economics.

Achieve Enterprise-Level Data Protection Without an Enterprise-Size Data Center


Data Domain DD3300 is purpose built for modernizing data protection of small to mid-size organizations and enterprise remote/branch office environments. DD3300 is simple and comprehensive, cloud-ready, and offers multi-site scalability.

DD3300 offers Dell EMC’s comprehensive data protection capabilities, including inline encryption and DD Boost for faster backups and lower network usage. DD3300 also provides coverage for a wide application ecosystem, from enterprise to homegrown applications. A 2U appliance, the DD3300 enables you to start small and expand capacity as your needs increase; with an average data reduction rate in the range of 10-55x, it will provide an impressive ROI along with dramatic cost savings, bringing greater scalability and a significant reduction in WAN bandwidth use for backups and recovery.1
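As a rough illustration of what a 10-55x reduction rate means in practice, here is a hedged back-of-the-envelope sketch; the 16TB of usable physical capacity is an assumed example figure, not a DD3300 specification:

```python
def logical_capacity_tb(physical_tb, reduction_rate):
    """Logical (pre-deduplication) data that fits in a given
    physical capacity at a given reduction rate."""
    return physical_tb * reduction_rate

# Assumed example: 16TB of usable physical capacity on the appliance
for rate in (10, 55):
    print(f"{rate}x -> ~{logical_capacity_tb(16, rate):,.0f}TB logical")
# 10x -> ~160TB logical; 55x -> ~880TB logical
```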


Modernize Your Data Protection With Simple Extension to the Cloud


To enable smaller IT environments to simply extend to the cloud for long-term retention, DD3300 supports Data Domain Cloud Tier. With DD3300, organizations can natively tier deduplicated data to the cloud for long-term retention without the need for a separate cloud gateway or a virtual appliance. This new compact Data Domain delivers cost-effective long-term retention on a wide cloud ecosystem including Dell EMC Elastic Cloud Storage (ECS) and major public clouds.


DD3300 also supports the Data Domain Cloud Disaster Recovery service. In conjunction with Dell EMC Data Protection Software, Data Domain Cloud Disaster Recovery enables virtual machine images protected on DD3300 to be copied to object storage within the public cloud for a modern, cost-efficient disaster recovery solution that takes advantage of the cloud.

Protect Data Wherever It Resides With Multi-Site Scalability


With the included Data Domain Replicator, DD3300 delivers elegant multi-site scalability: fast, network-efficient, encrypted replication from remote offices to the central data center. It transfers only deduplicated data across the network, eliminating up to 98% of the bandwidth required.2 With its compact design, DD3300 is ideal for multi-site scenarios with remote and/or branch IT environments that house data separate from the central IT environment.


Get Even More With Dell EMC Data Protection Software


To maximize the return on investment and get the most out of Dell EMC’s advanced deduplication, Data Domain DD3300 can be paired with our modern Data Protection software. With DD3300 and Dell EMC Data Protection software, customers can amplify the logical capacity and cloud capabilities, benefit from an intuitive user-friendly interface for simpler management, and take advantage of advanced VMware automation and integration capabilities.


Whether securing data of a small to mid-sized business or a department of a Fortune 100 company, Dell EMC Data Domain DD3300 provides dependable protection.


Dell EMC Data Domain 3300 is generally available through Dell EMC and channel partners.

Thursday, 8 February 2018

2018 Brings Once-in-a-Generation Shift in Data Storage


I have to be honest. Much of what I predict to be significant for the data storage industry in 2018 may go unnoticed by most IT professionals, even though some of these developments represent major progress that hasn’t been seen in nearly a full generation!

So much has been happening in the development of exciting new storage-class memory (SCM) that will help make arrays faster and more efficient in processing general purpose workloads, and this year I believe we’ll see a tangible payoff with the mainstreaming of this technology. This big news is all about Non-Volatile Memory Express (NVMe) and the emergence of this highly anticipated interface within commercial storage arrays. With Artificial Intelligence and Machine Learning seemingly all the rage in IT these days, I expect to see emerging and increasingly practical use cases coming to commercial storage offerings that attempt to further automate storage management, taking concepts such as intelligent data placement and performance tuning to the next level. And momentum is building for purpose-built data center hardware that provides optimized data paths designed to accelerate specialized workloads.

But enough with the introductions, let’s dive into the details!

Prediction #1: NVMe Will Go From Promise to Productivity in Commercial Storage

A combination of lower cost components being produced in larger volumes by multiple suppliers and a fully-developed specification will finally propel NVMe towards mainstream adoption in enterprise storage. With both storage and servers NVMe-enabled, enterprises will be offered compelling choices that take advantage of a more efficient hardware/software stack that delivers a performance benefit trifecta: low latency, reduced CPU utilization and faster application processing to accelerate performance over that of non-NVMe flash SSDs.  While NVMe-enabled storage arrays will get an initial boost in performance, the true potential for NVMe will be realized later in 2018 when next-generation SCM becomes available (more on that below).

Although NVMe-based flash drives and a handful of storage arrays have been offered for several years, they have typically been 30-50% more expensive than equivalent All-Flash arrays. At that kind of premium, the jury was out on whether these NVMe products were worth the price, since most enterprises wouldn’t have seen a difference worth the cost for general purpose workloads. However, that is changing. Technology maturity means volumes are rising, component costs are coming down and multiple NVMe SSD suppliers are finally ready for prime time.

We’re only at the beginning of the NVMe innovation cycle. This overnight success has been 10 years in the making, with both Dell and EMC playing prominent roles along the way. In 2018, our industry vision, intellectual property investment and long-term strategic supplier partnerships will pay off. Although a few proprietary NVMe storage products were launched in 2017, broad mainstream solutions will require long-term commitment and dedicated investment to keep up with the latest advances in flash and SCM. We’re ready.

Another underappreciated aspect of NVMe is lower CPU utilization. The NVMe software stack executes fewer instructions. It’s highly optimized for the parallelism of contemporary multi-core processors. With lower CPU utilization for storage, you will have more of your server available to run your applications, which translates to better TCO, improved infrastructure efficiency and software license cost reduction. This kind of performance advantage will be highly sought after by organizations running OLTP and real-time analytics.

Prediction #2: NVMe Over Fabrics (NVMeOF) Will Continue to Emerge and Develop

Hardly anyone will be adopting NVMeOF for production until the industry gets a rich, interoperable set of ecosystem components.  However, we will see incremental progress in 2018, first with NVMeOF Fibre Channel for the incumbent SAN, and then with NVMeOF Ethernet solutions for next-gen data centers. It’s all in line with the development of new interfaces and new storage protocols, but none of it will happen overnight. We need the ecosystem to come along, with new switch ports, software stacks, new host bus adapters (HBAs), etc. In order for adoption to grow, all these factors will need to be developed into a full-fledged ecosystem. As a practical example, at Flash Memory Summit 2017, there must have been a dozen different NVMeOF solutions announced, and I’d guess that no two interoperated with one another. That just reflects the current state of development. I’m a believer and a champion, but it’s early days still. When NVMeOF does hit primetime, watch out. Vendors who do the homework to vertically integrate NVMe in their network, compute and storage products will be at an advantage to offer an incredible performance package for organizations looking to super-charge their SANs. Early adopters will likely be in the HPC, scientific and Wall Street high-frequency trading domains, though enterprises upgrading to modern data centers running cloud-native, IoT and AI/ML applications won’t be far behind.

Prediction #3: Storage Class Memory (SCM) for Storage Will Become a Reality in 2018

Our industry has largely been about spinning hard drives and dynamic random-access memory (DRAM) forever. In 2008, Flash came in and pushed out hard drives as the leading storage media type. In 2018, for the first time in a generation, there are several viable emerging memory candidates in this space. First is Intel with 3DXP. They dipped their toe in the water last year and 2018 is when it becomes a part of mainstream storage architectures. This new SCM should operate in the 10-20 microsecond realm instead of the 100-200 microsecond range for flash. This 10x performance improvement will manifest as both storage cache and tier to deliver better, faster storage.

Of course, low latency applications, such as high frequency trading, will benefit tremendously from SCM. However, SCM is not just for the top of the pyramid workloads and lunatic fringe; the average enterprise will benefit anywhere the equation “Time = Money” comes into play. SCM will be leveraged for real-time risk management – at any time, your most important data needs to be accessed at the lowest possible latency.  And it’s still just the beginning. We don’t get to see a completely new media technology every day. As Pat Gelsinger, CEO of VMware, once said, “There have only been four successful memory technologies in history and I’ve seen over 200 candidates to become the fifth.” The fifth is here, and there are more to come.

Prediction #4: Advancements Will Be Made Around Artificial Intelligence and Machine Learning (AI/ML) in Storage

As an industry, we have been using machine learning techniques to tier data and implement unique solutions in storage for years. Take for example the “Call Home” functionality in VMAX. Our products send regular/high frequency telemetry of all aspects of our storage platforms to Customer Service. This data is analyzed for patterns and anomalies to proactively identify situations before they become problems.  We’re flattered that this approach has been imitated by others, such that now it is a best practice for the industry. Another win for customers.

For 2018, we’ll be seeing AI and ML integration accelerate. Intelligent data tiering will go finer-grained; we’ll see management of more types of media, such as SCM for example – and we’ll be designing in the use of new forms of hardware acceleration to enable that. We will adapt and adopt the latest innovations from the semiconductor processor world, such as graphics processing units (GPUs), tensor processing units (TPUs) or field-programmable gate arrays (FPGAs) to enable autonomous, self-driving storage.

New array applications for AI/ML capabilities will come into play in different ways. Consider the array dynamics when a new workload comes along. AI/ML spring into action, drawing upon telemetry from not only this particular array, but from the cloud-based analysis of all similarly configured arrays to derive the optimal configuration to accommodate this new workload without impact to existing applications.  AI/ML turns the global experience pool of all workloads on all storage arrays into an automated tuning subject matter expert.  Today’s capabilities of CloudIQ and VMAX Call Home are just the beginning. Eventually the idea is that we’ll be able to use cloud-based AI and ML to fully automate the operation of the storage. This will mean storage systems that do more of the data management themselves, enabling organizations to shift dollars away from today’s IT maintenance budgets over to tomorrow’s Digital Transformation initiatives.

Prediction #5: Pendulum Will Swing Back to Heterogeneous Infrastructure to Accommodate Specialized Workloads

Five years ago, the typical view of the data center would be centered on row upon row of identical x86 servers. These servers, automated and orchestrated as a cloud, delivered a standardized set of IaaS and PaaS capabilities for the vast majority of IT workloads. Today, we’re seeing rapid growth in a new class of algorithmic workloads that are often better suited for specialized processors rather than general purpose homogeneous hardware. Purpose-built hardware often runs these workloads significantly faster and consumes an order of magnitude less power than general purpose compute running software-only solutions. This means that optimized infrastructure architectures will need the ability to deploy business solutions that take advantage of rapid advances in algorithmic processor technology, while keeping all the software-defined flexibility and agility of hybrid and private clouds. Think of this as “software-defined control plane meets hardware-optimized data pipeline.” This architecture may exploit GPUs for machine learning, FPGAs for custom functions, and offload engines for algorithmic data services such as dedupe, compression and encryption.

These raw materials eventually become services, delivered across a low-latency datacenter fabric. This forms the architectural substrate for truly composable infrastructure. Our job will be to manage and orchestrate the dynamic provisioning of those IT services. We’ll start seeing these new capabilities delivered as POCs in 2018.

2018 will be an exciting time for IT infrastructure and our customers. The contemporary IT Architect will have an unprecedented set of capabilities at their disposal, from new processing models, a new class of storage media, and advances in system and data center interconnects. This is especially the case in storage.  With major technology advancements in both capability and affordability on the horizon, 2018 will be a year where we can truly expect to do more for less.

Saturday, 3 February 2018

Unlocking “Out of Office” Personas to Drive Customer Demand


Dell EMC is committed to helping businesses maximize the potential of their employees via the technology they use. And because we can offer more personalized products to suit a range of in- and out-of-office environments, that potential is more achievable than ever.

So, with that in mind, let’s review two of our fastest growing personas that work outside the office: remote workers and “on-the-go pros”. Each requires different solutions to achieve its potential.

Remote Workers


It’s not difficult to imagine where you’d find a remote worker: anywhere but the office. They might operate at home, abroad or in a local café. The range of devices they use is as broad as the number of locations they work from. They need technology that can keep up with their pace.

The Dell Latitude range fits the bill. It has powerful notebooks with essential features. The mainstream 5000 Series offers high performance, and the premium 7000 Series is the ultimate portable solution because of its long battery life. And they’re all designed to run Microsoft Windows 10 Pro. But the remote worker’s home-office experience is also critical. Dell Wyse thin clients or Dell OptiPlex desktops can provide a fully managed desktop experience and can be supplied as a complete solution via the Dell VDI Complete service. Their home becomes their home office.

Check out the remote worker guide to learn how The University of Massachusetts manages its own out-of-office student network with vLabs, its VDI infrastructure.


On-The-Go Pros


Because on-the-go pros spend most of their time out of the office, one of your customers’ key concerns for these employees is security.  With Dell EMC, you can enable these employees to succeed, safe in the knowledge their devices and data are secure.

These users need efficient but feature-rich experiences, and nothing beats the mobile device. The Latitude 2-in-1 range offers Windows 10 with built-in security. VMware Workspace ONE can be layered on top of an on-premises server provision as well as via the public cloud. Either option satisfies the user’s need to have the same access wherever they are while offering robust security. What’s more, RSA’s NetWitness Suite provides immediate detection and response to any threat, anywhere.

Carnival Corporation is one of the most popular cruise brands in North America and is a great case study of how to provision a complex IT environment. Each vessel acts as a remote office, and its mobile workers connect to Carnival’s global ecosystem. Read about their solution in the on-the-go pro guide.

Our Approach


Technology has a huge potential to help organizations transform their workplaces, and by extension, transform their people’s working lives. We believe that approaching workers as personas is a critical part of workplace transformation, providing personalized products for how employees work today and in the future.

We’ll take care of the solutions, so you can take care of your customers. Read the On-the-Go Pro and Remote Worker guides, as well as others, here.


We’ve also created related emails here, on our new Digital Marketing Platform so that your marketing teams can quickly get these guides into the hands of your customers. The guides explain how to maximize the productivity of their employees through the right choices from our end-to-end portfolio.