Saturday 31 August 2019

How A Mining Technology Company is Transforming into an Analytics Company

How do you turn your idea into a market-ready product or service? How do you move a prototype into production? How do you scale to mass production while maintaining high levels of customer support? If you’re an existing company, how do you digitize your assets and develop a different customer model that provides new services and drives additional revenues? These are the burning questions that keep innovators and business leaders awake at night.

Focus on what you do best


As VP of the Dell Technologies OEM | Embedded & Edge Solutions Product Group, I believe success comes down to the right partnerships and division of labor. No one person can be an expert at everything. My advice is to focus on what you do best – generating great ideas or running your core business – and recruit the right partners to help do the rest.

The Weir Group’s digital transformation is a classic case in point. An engineering company founded in the 1870s, Weir is now increasingly seen as an expert in both mechanical technology and analytics.

Weir’s new value proposition


Of course, providing the best mining equipment continues to be at the heart of Weir’s offering but it is no longer the only differentiator. Weir’s enhanced value proposition is focused on process optimization and predictive maintenance, where the company can remotely monitor the performance of customer equipment and use this data to schedule preventative maintenance.

Avoid downtime & gain new insights


This approach also means that customers avoid costly, unscheduled downtime and gain better insights into the likely timing of equipment replacements. You can read more about Weir’s digital transformation story here. In this blog, I want to share some personal insights into what we did from an engineering perspective to help bring Weir’s vision to life, and share my top tips for successful digital transformation.

Together is better


For starters, I personally loved the fact that every partner involved in the project contributed to the ultimate solution. What we designed together reflects not only Weir’s IP, but also our expertise and the expertise of our partners and suppliers. The solution really was bigger than the sum of its parts. Together, we developed something that none of us could have done alone.


The customer is all that matters


That brings me to another important point. While Dell Technologies might be one of the biggest technology companies in the world, we know that we don’t always have all the answers. We’re happy to bring in other expertise as required, work with other partners and generally just do what’s right by the customer.

Open minds


We all came to this project with open minds, no pre-conceptions and a willingness to listen and learn. We had to understand Weir’s business and what it was trying to achieve.

While Weir had developed an in-house prototype, we started by sitting down, discussing what was needed and working together on an architectural design. Apart from engineering services from my group, we also involved colleagues from Supply Chain, Operations, Materials, Planning, Customization and Dell Financial Services. We were there for the long haul, providing value at every step along the way. It was a real partnership from the outset.

Innovation takes time


It’s important to say that the entire process – from initial idea and proof of concept to the first production unit leaving the factory – took two years. That wasn’t because any partner was being slow or unresponsive; the reality is that figuring out how to make a production-level, industrial-grade, top-quality solution takes time.

A shared understanding


In many ways, we understood first-hand what Weir was trying to achieve. Many people still perceive Dell Technologies as a commercial, off-the-shelf box provider. And, it’s true – we do that very well, but we have also evolved into a full services and solutions provider. So, we know all about setting a vision and working to transform what we do and how others perceive us.

Transforming business models


For Weir, the good news is that production roll-out is underway. This is one of our most complex IoT projects to date, and I believe the new technology solution has the potential to transform Weir into a digital engineering solutions company, revolutionizing how the company serves its mining, oil and gas, and infrastructure customers and opening up possibilities for new products and services.

Thursday 29 August 2019

Dell EMC PowerProtect Cloud Snapshot Manager, SaaS Data Protection for Cloud Native Workloads

It has been a busy summer and we are excited to share what we have been up to! Dell EMC Cloud Snapshot Manager is now part of the PowerProtect Software Family! Moving forward, Cloud Snapshot Manager will be called Dell EMC PowerProtect Cloud Snapshot Manager. As part of this release we have added some new features to CSM.

The protection of cloud native workloads is becoming more of a challenge. Cloud providers only offer rudimentary tools for the creation and deletion of snapshots. This can be cumbersome when customers have many cloud accounts and a large number of workloads to protect. Many enterprises that have adopted a multi-cloud strategy are facing a challenge in protecting workloads across multiple clouds in a uniform and seamless manner. Using different tools for each cloud is unmanageable and costly. As a result, customers are left vulnerable when it comes to protecting their workloads in the cloud, and they bear high monthly cloud costs as snapshots proliferate in their environments.


Dell EMC has a SaaS offering tailored to solving this growing issue. For those of you who are not familiar with CSM, Cloud Snapshot Manager is a SaaS solution that makes it easy for customers to protect workloads in public cloud environments (AWS, Azure) – without requiring installation or infrastructure. Customers can discover, orchestrate and automate the protection of workloads across multiple clouds based on policies for seamless backup and disaster recovery.


CSM breaks cloud silos, allowing customers to use one tool to protect workloads across multiple clouds. Designed for any size cloud infrastructure, CSM scales as your organization and data grows. The automatic assignment of resources to protection policies based on tags is essential to achieve auto-scaling in the cloud with the peace of mind that your resources are protected.

In this quarter’s release, we are proud to announce the following features:

◈ CSM now supports the protection of Blob storage containers in Azure with granular Blob object-level recovery. More and more cloud native applications are taking advantage of Blob storage containers — you must ensure that the data is protected and easily recoverable.

◈ Expansion of the REST API to facilitate greater automation (a hypothetical example is sketched below)
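As a loose illustration of the kind of automation the API opens up, here is a minimal Python sketch that calls a REST endpoint to create a tag-driven protection policy. The endpoint path, payload fields and authentication shown are illustrative assumptions only – consult the PowerProtect Cloud Snapshot Manager API documentation for the actual contract.

```python
import requests

# Hypothetical CSM API endpoint and token -- replace with values from your
# PowerProtect Cloud Snapshot Manager tenant and its API documentation.
CSM_API = "https://csm.example.com/api/v1"
API_TOKEN = "<your-api-token>"

def create_protection_policy(name, tag_key, tag_value, schedule_cron, retention_days):
    """Create a snapshot policy that protects every resource carrying a given tag."""
    payload = {
        "name": name,
        # Resources are assigned to the policy automatically based on this tag,
        # so newly created (auto-scaled) instances are picked up without manual steps.
        "resource_selector": {"tag_key": tag_key, "tag_value": tag_value},
        "schedule": {"cron": schedule_cron},
        "retention": {"days": retention_days},
    }
    resp = requests.post(
        f"{CSM_API}/protection-policies",
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    policy = create_protection_policy(
        name="prod-daily",
        tag_key="environment",
        tag_value="production",
        schedule_cron="0 2 * * *",   # daily at 02:00
        retention_days=30,
    )
    print("Created policy:", policy)
```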

PowerProtect Cloud Snapshot Manager has also been added as a promotion for new PowerProtect Software, Data Protection Suite, and DPS4VM customers. New customers are entitled to 20 instances of CSM for 6 months, free! Talk with your account rep to learn more about this promotion!

Wednesday 28 August 2019

5 Pitfalls to Avoid During your Cloud Migration

A purpose-built cloud operating model needs to be an integral part of every organization’s technology strategy. Some organizations have implemented a cloud operating model and made substantial cloud investments. Many, however, are still in the early phases of their cloud journey.


Early cloud adopters sometimes moved toward the cloud with much exuberance and paid the price when the wrong workload was moved, whether through escalating costs, poor performance or missed service levels. As a result, they’re rightsizing their clouds and repatriating the workloads that run better in their data center.

Cloud isn’t going away; it’s only going to become an even bigger part of organizations’ IT strategy. And, in most cases, poor cloud experiences are the result of poor planning and execution. For this reason, it’s important to be thoughtful in how you approach your cloud journey. With that in mind, let’s look at the pitfalls to avoid as you progress through the stages of cloud adoption.

1. Not aligning with business objectives


If your organization prioritizes expediency over practicality, and cloud investments are made without consideration for your business model or organizational structure, you are on a surefire path to cloud failure. Additionally, unrealistic expectations about how quickly users can adopt new processes or technologies can create misalignment and jeopardize the project. The greater the change, the greater the tendency to disrupt the flow of business. IT Ops should take a leadership role and help ensure that the organization’s cloud strategy accounts for the needs of both business and IT stakeholders in a cohesive way.

2. Making sweeping mandates instead of incremental changes


For years “cloud-first” and “we’re getting out of the data center game” were the rallying cries for many companies. But the reality is that on-premises is still a better option for many workloads. Per ESG, nearly nine out of ten organizations expect most (35%) or at least half (54%) of their applications and workloads to still be running on-premises in three years.

Instead of moving applications to the cloud wholesale, look at each application, and assess whether it is a good candidate for cloud or not. Important considerations include its service level requirements, how the business uses it, and regulatory requirements. For most organizations, public cloud, private cloud, and edge infrastructure should work together as part of a cohesive IT strategy. By taking this prescriptive approach to cloud, you can deliver the right experience out of the gate and reduce the churn and rework often associated with repatriation.
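To make that assessment repeatable, some teams capture considerations like these in a simple scoring rubric. The Python sketch below is a hypothetical example of such a rubric; the criteria, weights and threshold are illustrative assumptions, not a Dell EMC methodology.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    latency_sensitive: bool           # strict service-level / latency requirements
    data_residency_constrained: bool  # regulatory or data-sovereignty limits
    steady_state: bool                # flat, predictable utilization
    tightly_coupled_to_onprem: bool   # depends heavily on on-prem systems or data

def cloud_candidate_score(w: Workload) -> int:
    """Higher score = better public-cloud candidate (illustrative weights only)."""
    score = 0
    score += 0 if w.latency_sensitive else 2
    score += 0 if w.data_residency_constrained else 2
    score += 0 if w.steady_state else 2          # bursty workloads benefit most from elasticity
    score += 0 if w.tightly_coupled_to_onprem else 1
    return score

workloads = [
    Workload("seasonal web storefront", False, False, False, False),
    Workload("core billing database", True, True, True, True),
]

for w in workloads:
    s = cloud_candidate_score(w)
    verdict = "public cloud candidate" if s >= 5 else "keep on-premises / private cloud"
    print(f"{w.name}: score {s} -> {verdict}")
```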

3. Creating disjointed experiences for developers and IT


With some applications and data living on-premises, and some in the cloud, it is very easy to create a situation where both developers and IT operations are working in a series of disconnected siloes. By having two sets of processes and tools, it becomes more difficult for teams to work together, and establish best practices for both environments. This creates a rift between environments, decreasing the pace of innovation, and increasing the costs of managing and developing across your entire IT landscape. Approaching IT from a hybrid perspective and establishing a common infrastructure and management experience across these disparate environments can go a long way to delivering consistent experiences.

4. Wasting existing skills and investments


Reducing the data center footprint, revamping operational models, and adopting new technologies sounds appealing to many organizations considering a cloud migration. The hope is that by essentially starting from scratch, they can eliminate complexity and technical debt. The problem is that this wastes all the skills, processes, and infrastructure investments that your organization has spent years accumulating. This waste means onboarding unfamiliar practices and can create exposure to risk as the organization struggles to consume these new technologies and surround them with the right skillsets. Finding a way to leverage your existing investments in conjunction with cloud services can greatly reduce the amount of retraining and additional upfront investment that is required during this transition. By relying on your strengths and incubating new technologies or operational paradigms, you will move a little slower, but also in a more controlled and secure manner.

5. Being trapped in a single cloud


Before making a move, it’s important to ask yourself what you’re going to do if it doesn’t work. Different workloads work better in the cloud vs. on-premises, on one provider’s cloud vs. another, and so on. Looking at the hyperscaler options that are available, you would be hard-pressed to pick a clear-cut victor; each will dominate the landscape for many years and find its niches. For this reason, you must figure out how to make applications portable and how to protect critical data in such a way that you won’t experience data loss or be prohibitively penalized for moving it. It’s important to note that the concept of data gravity will greatly impact your ability to move later, so having a strategy on the front end that rationalizes all cloud investments is critical. Thinking through these risk factors and creating a contingency plan will ensure that you’re at least going in with your eyes open.

In Closing


I hope that you find these tips helpful. When we were looking at our cloud strategy for Dell Technologies Cloud, we saw organizations running into many of these challenges. That’s why we built it—to help companies avoid these pitfalls and thrive in this multi-cloud world. We’ve partnered with some of the largest hyperscale cloud providers to offer choice, built an offering that removes a lot of the complexity, and designed professional services to help our customers decide where to go and what to move. Ultimately, our goal is to be your partner, so we can help you get away from generic approaches and focus on defining and executing your winning cloud strategy—no matter what that looks like.

Tuesday 27 August 2019

Transforming Network Infrastructure for Cloud-Optimized 5G Services


Imagine the future – a scalable, composable and automated network infrastructure that meets the high-performance needs of tomorrow’s 5G services and makes it easy to build, deploy, manage, and provide assurance for new end-to-end applications. The new era of 5G networks will see technology and operational transformation, but also business innovations that result in intelligent applications consuming and generating data like never before. These intelligent applications will introduce a new set of requirements – latency, bandwidth, capacity, coverage – that drives a transformation of the entire end-to-end architecture, including the compute, network, and storage infrastructure. What used to be possible only in science fiction movies – flying drones, driverless cars and planes, machine-to-machine interactions, seamless communication around the globe – is fast becoming a reality.

Since 2012, the industry has experienced an acceleration of changes leveraging the capabilities of compute, network, and storage virtualization to drive new capital and operational models, deliver new services, and improve overall service delivery economics. The new operational imperatives for Communications Service Providers (CoSPs) can largely be captured in three core technology shifts:

◈ Leverage disaggregation of hardware and software stacks to shift workloads towards general purpose compute, such as Intel architecture.

◈ Decouple core infrastructure and network services from applications and protocols, and expose those services as a platform to applications.

◈ Develop a set of information models, data models and APIs to transform operations from bespoke processes and associated infrastructure scripts, to more unified automation frameworks that allow service providers to develop DevOps-style operational processes.

Every decade, the mobile industry goes through a major upgrade cycle of their network architecture – from the Radio Access Network (RAN) to the Packet Core – to meet the insatiable demand of smart mobile devices and the new generation of applications and services.


The impending 5G transition, with significant advances in bandwidth, latency and quality of service, will unleash a new wave of services including enhanced mobile broadband, connected cars, drones, smart retail, industrial robots and much more.

Why a Software-Defined Infrastructure Is Critical to the 5G Transition


The advent of smart everything and the growing amount of data traffic is already putting immense pressure on the network infrastructure. The CoSPs – aka telco operators – have a business challenge: how do they deal with this increased traffic efficiently and at a lower cost, while allowing for rapid deployment of end-to-end services to grow the business?

This motivates the CoSPs to aggressively transform their network by virtualizing network functions on cloud infrastructure with Network Function Virtualization (NFV) and Software-Defined Networking (SDN). The result is a homogeneous infrastructure that utilizes standard, high-volume Intel servers and delivers TCO benefits as well as the cloud scale necessary for the next generation of applications and services. The NFV deployments are already happening, and over the last three years we have seen strong commitment from CoSPs around the globe.

The smart applications enabled by 5G technology will utilize concepts such as network slicing to deliver high bandwidth and lower latency, with end-to-end Quality of Service (QoS), ranging from devices to network infrastructure and the cloud. The need for service assurance drives a few key requirements:

First, the infrastructure needs to be composable and scalable, so resources such as CPU, memory, storage and networking can be dynamically assigned based on the needs of the application.

Second, these resources need to be configured, monitored and managed “automatically,” based on well-defined policies and a Management and Orchestration (MANO) framework built on standard APIs – an approach also known as closed-loop automation.

Third, the infrastructure needs to be secure and resilient. As multi-tenancy and service innovation on an open architecture becomes the order of the day, end-to-end security needs to become pervasive.
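To make the closed-loop automation idea concrete, here is a deliberately simplified Python sketch of a monitor-decide-act loop driven by a policy. The metric source, thresholds and scaling calls are hypothetical stand-ins, not a real MANO framework or vendor API.

```python
import random
import time

# Hypothetical policy: keep CPU utilization of a network function below 80%,
# scale out by one instance when breached, scale in when comfortably below 50%.
POLICY = {"metric": "cpu_utilization", "scale_out_above": 0.80, "scale_in_below": 0.50}

def read_metric(vnf: str) -> float:
    """Stand-in for a telemetry query (e.g. from a monitoring system)."""
    return random.uniform(0.3, 1.0)

def scale(vnf: str, delta: int) -> None:
    """Stand-in for an orchestrator call that adds or removes instances."""
    action = "scale out" if delta > 0 else "scale in"
    print(f"[orchestrator] {action} {vnf} by {abs(delta)} instance(s)")

def closed_loop(vnf: str, iterations: int = 5) -> None:
    for _ in range(iterations):
        utilization = read_metric(vnf)                 # 1. monitor
        if utilization > POLICY["scale_out_above"]:    # 2. decide against policy
            scale(vnf, +1)                             # 3. act
        elif utilization < POLICY["scale_in_below"]:
            scale(vnf, -1)
        else:
            print(f"[loop] {vnf} utilization {utilization:.0%} within policy, no action")
        time.sleep(1)

closed_loop("vnf-packet-core")
```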

Intel and Dell EMC: The Pillars of 5G


5G represents a significant opportunity for the telecommunications industry to bring together these digital transformation trends into a unified architecture, leveraging the capabilities of Software-Defined Networking (SDN), Network Function Virtualization (NFV), and increasing amounts of automation and orchestration. The insatiable demand for ubiquitous, uniform connectivity and capacity, enabling services such as the Internet of Things, Connected Vehicles, mission-critical data services, and broadband everywhere, delivered with cloud economics, creates the perfect opportunity for 5G networks to leverage the new technology discussed above.


Source: 5G PPP

Intel and Dell EMC are aligned on the pillars of 5G networks and the need to embrace the three core technology shifts, and have an established relationship to research and develop solutions that help CoSPs rise to the challenge by:

1. Engaging strategically with CoSPs to define, prioritize, research, develop and bring to market infrastructure innovations that solve key challenges the industry is facing.

2. Developing a cloud optimized, 5G RAN infrastructure with a virtualized Radio Access Networks (vRAN) or Cloud RAN (CRAN), leveraging the benefits of pooled resources for baseband and packet processing.

3. Developing a joint infrastructure solution for Multi-Access Edge Computing (MEC), allowing intelligence at the edge with new applications such as Augmented Reality (AR), Virtual Reality (VR), Analytics at the Edge, Edge Caching for Content Delivery Networks, IoT Gateways, and other low-latency applications.

4. Creating a joint lab environment where Intel and Dell EMC engineers can collaborate on researching, prototyping, and developing solutions.

5. Investing in applied research that helps the industry understand the benefits of infrastructure enablers embedded in general purpose computing platforms. This includes leveraging FPGAs for workload acceleration, NVMe technology, DPDK for efficient data plane processing, and Intel® QuickAssist Technology (QAT) to deliver end-to-end security, improving the performance and efficiency of workloads at the network edge.

In summary, the world is changing rapidly. The advent of “smart everything” and the intelligent end-to-end applications driven by 5G will require an infrastructure transformation – to an efficient, flexible and composable infrastructure. The Intel and Dell EMC partnership is delivering the solutions to enable this infrastructure transformation and enable this “smart everything” revolution.

Friday 23 August 2019

Beyond the Basic Transformation: How Business and Workforce Evolves

Workforce Transformation is a theme discussed across all organizations. Companies must transform with technology, even if technology is not what they do.

Even companies that have consistently demonstrated excellence in delivering technology-based solutions are at risk, as the underlying architectures they have built their businesses on change. The competitive pace of change creates internal pressure to adapt systems and processes. This often leads to unintended skill gaps. Many of these organizations feel like they are behind and cannot keep up.

Let’s consider Telecommunications Service Providers as one segment representative of this change. They provide the backbone of the Internet, and the critical access and mobile infrastructure, so the rest of the world can continue on its transformation journey. All businesses today are built on the Internet and cloud technologies. Industries have embraced this mobility as a critical means of connecting to services, customers and partners. While 3G and 4G/LTE technologies were designed and used as an enabler of high-speed mobile data to the eventual smartphone and tablet, with the advent of 5G, vertical industries will look toward a new age of mobility as a foundation for their future success. To capitalize on this trend, organizations must transform operational and organizational models to maximize the full potential of 5G. This is especially true with new opportunities at the edge.

Organizations must focus on aligning technology and organizational strategy. This will ensure they not only exist in the next 5 years, but also grow.

Dell EMC sees 4 pillars of Telecom Transformation:

1. Network Modernization
2. IT & BSS/OSS Transformation
3. Digital Growth & Transformation
4. Workforce Initiatives

To place this into context, it is worth considering what has shaped existing organizations. The scale, composition and structure of telecom organizations represents one of the defining features of the industry. This legacy has been shaped over time by diverse physical network functions, hierarchical systems management, regulatory & compliance restrictions and other industry specific issues. The telecommunications industry has continually endured massive technology shifts and adapted to new business models; however, the rate of change and the pace of disruption only continue to accelerate.

Stepping back and examining the larger picture reveals a multitude of technology disruptions taking shape simultaneously within the industry. Network virtualization, OSS & BSS modernization, real-time analytics and advanced telemetry have been underway for some time. To this, planners and strategists must add 5G, other radio technologies (such as WiFi 6 and CBRS), new IoT paradigms and further disaggregation of access and edge networks. Underpinning all these changes are the ever-present currents of openness and open source. Taken together, these present challenges to any organization striving to adapt and reinvent itself.

In particular, the widespread belief is that public cloud operating models (massively scaled within centralized data centers) have solved the challenges facing the Telco Cloud. However, the industry continues to identify requirements at all layers – from facilities to infrastructure to skill sets to processes – that are unique. This learning is important – Public Cloud is not a “lift-and-shift” to Telco Cloud. Public cloud has solved the challenge of deploying tens of thousands of things at a handful of facilities – expanding that to hundreds of things at thousands of disparate facilities is a different problem space. Remote management, automation, orchestration, and operations are unique problems for the Telco Cloud.

Furthermore, Public Cloud is built on standardization of a single resource building block. Standardized servers are made available in standardized racks, replicated across data center rows. Those rows are replicated across the data center. This homogeneous architecture meets the needs of the majority of tenants. The Telco Cloud, especially closer to the edge, is more heterogeneous, and the difficulty of reaching facilities requires that the right architectures and right capabilities are made available in as few iterations as possible.

With this in mind, implementing workforce programs designed to acquire new skills, change the culture and embrace innovation is critical for success. Returning to our themes of transformation, it is worth pointing out that the first three pillars all have the workforce consideration in common. This consideration is pervasive throughout the entire company and, as such, must be a top priority for the leadership team.

For example, traditional job roles may no longer align to business-driven technology adoption. The ability to redefine roles and offer training programs designed for these new challenges should be leadership-initiated. Today many organizations are focused on career skills that encompass web development, data science and analysis, advanced programming, cloud computing and API design, all within the construct of DevOps and agile methodologies.

While this may seem at face value to be an internal set of challenges, the reality is that the problem statement can be recast to reflect a rapidly shifting external world that to some extent must be embraced, harnessed and brought within the organization in a meaningful way.

The dynamics at play between external and internal forces can be characterized as follows:

◈ New technologies, communities and ecosystems are driving an innovation wave throughout the industry.

◈ Maximizing this potential requires new models of interacting, adopting and embracing these currents of opportunity.

◈ A variety of traditional modes of operation can impede or create pressure on acquiring innovation.

◈ An implicit acceptance of mismatched operating models introduces paralysis.


In other words, it’s not just technology for the sake of technology – it’s about operational excellence.

But that is where Dell Technologies comes in — we can help with this transformation from the inside-out. So where does this transformation start?

We will further explore how Dell Technologies can help Telecom Service Providers meet the challenges of transformation by focusing on people first. This is a journey, much like many others today, but the destination is the Digital Workforce. Having an employee base that thrives within a cloud first model will be the true engine for industry growth.

Thursday 22 August 2019

Get Your Head in the Clouds – Accelerate Your Business with Cloud Native Applications

Where does the IT department fit when application development shifts to a cloud native model? Does IT still matter? Do YOU still matter? How do you add value?


In our digital-first world, enterprises of all sizes must move at startup speeds to outpace their competition. The need for speed can push application developers and IT to their limits. Traditional application development timelines are no longer a viable option; they leave organizations trailing behind. Those who deploy cloud native applications can meet customer and stakeholder expectations for speed and functionality faster than ever before.

It’s called “cloud native” development for a reason


Cloud native applications are applications developed to run in any environment and at any scale. They can run on public or private clouds, in traditional data centers, on intelligent edge devices, and with easy growth from a single laptop to operation across multiple data centers. They also do this faster, with greater scale and serviceability than traditional application models. Simple enough, right?

Developers predominantly build cloud native applications in containers – executable software packages that include both the application code and the infrastructure dependencies for running that code. And containerization is moving fast. According to Gartner, “by 2022, more than 75% of global organizations will be running containerized applications in production, which is a significant increase from fewer than 30% today.”

As Michael Dell has said: “The cloud isn’t a place, it’s a way of doing IT.” Accordingly, using a cloud native model doesn’t require you to develop and run your applications in a proprietary public cloud; it’s a style of development and deployment that supports the requirements of a hybrid cloud/multi-cloud way of doing business. Put simply: while containers don’t require a cloud-based approach, they provide an easy way to move to a cloud-based deployment model.

This microservices-based model requires the management of a large number of new containers rather than a small number of monolithic applications. To control them, you need a container orchestration tool to keep your container-based applications healthy and nimble. Things like application updates and simplified rollbacks are necessary for true production-ready platforms. While there are many different container orchestration systems, 83% of companies that use one report that they run Kubernetes. It holds a wide lead in market share over any competitor. If you’re unfamiliar with Kubernetes, our good friend Phippy from the Cloud Native Computing Foundation can help you understand it here.
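If you want a feel for what talking to Kubernetes looks like in practice, the short Python sketch below uses the official `kubernetes` client to list the pods a cluster is orchestrating. It assumes the `kubernetes` package is installed and a kubeconfig is available on the machine.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (e.g. ~/.kube/config).
config.load_kube_config()

v1 = client.CoreV1Api()

# List every pod the cluster is currently orchestrating, across all namespaces.
pods = v1.list_pod_for_all_namespaces(watch=False)
for pod in pods.items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
```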

Develop a strong partnership between IT, application developers and business leaders to accelerate cloud native applications


IT can ensure your environment is ready to shift more applications to a cloud native model. Armed with modern tools, they can anticipate and meet the needs of developers and business leaders before they try to solve problems on their own.

Developers: We often hear from cloud native application developers that they need flexibility to work on the platform of their choice (VMware, Pivotal, Red Hat, Google, Microsoft, etc.). They want to use the development tools and resources with which they have expertise. They also can’t wait for IT to stand up hardware for running new applications as they have in the past. They want a frictionless experience where they can stand up and tear down containers when they need to and without delay.

Business leaders: While business leaders need the speed and rapid update cycles that cloud native development processes deliver, they also want to avoid turbulence. That means protecting applications and data with production-ready solutions that deliver a seamless user experience at lower risk. Business leaders also want assurance before they invest in a new development model; they want to know they’re partnering with industry leaders who aren’t just experimenting with a fleeting technology trend.

How can Dell EMC help you realize your cloud native strategy?


As a platinum member of the Cloud Native Computing Foundation, Dell Technologies has a proven commitment to accelerating cloud native adoption. At Dell EMC, we leverage that partnership to build products that are more cloud native friendly. We also collaborate with a broad ecosystem of other industry leaders to help IT and developers move into the cloud native world.

With Kubernetes running on Dell EMC, you can accelerate your adoption of a cloud native approach in a way that is flexible, frictionless and safe.

◈ Flexible – Provide developers the most common cloud native platforms while you maintain control of cost and availability. How? Leverage the Kubernetes deployment option of your choice through Dell EMC platform integration with Pivotal, Google, Microsoft and IBM (Red Hat).

◈ Frictionless – Provision and deliver the underlying infrastructure services on demand to your containerized applications wherever they are deployed across your private clouds, public clouds and edge deployments. We provide tools that are deeply integrated and automated to deliver a consistent experience, allowing IT to move at the speed of developers.

◈ Safe – Deliver containerized apps with confidence in large-scale production environments through best-in-class data protection, security and support services. We also support a broad curated ecosystem of partners, providing dependable and durable support for your Kubernetes applications.

We can help you provide the most popular Kubernetes deployment options to your developers while maintaining control over the IT environment with fast and simple cloud native operations. We can do this all on proven infrastructure with resilient operations, to protect your organization and its valuable data.

Tuesday 20 August 2019

Tech Breakthroughs Don’t Happen in a Vacuum: Why Open Innovation is Critical in the Data Era

In 2003, Dr. Henry Chesbrough published a paper that challenged organizations to drive new technology breakthroughs outside of their own four walls, and in collaboration with customers and partners for an outside view. The approach, open innovation, follows a framework and process that encourages technologists to share ideas to solve actual challenges.


I loved it. It was fast, yet pragmatic. It was conceptual, but grounded in real-world challenges that we could solve. The time and resources invested in innovation delivered better outcomes because it was developed with customers and partners.

For me, open innovation is core to how my teams have fostered new technology discoveries and patents that get realized in real-world use cases. It’s an archetype that has proven successful for Dell Technologies, notably as our customers look to modernize their IT infrastructure as part of their digital transformation.

Four tenets govern open innovation: collaboration, openness, rapid prototyping, and a clear path to commercialization. Our innovation teams have embraced this process, developing new solutions alongside our customers and partners based on the realities of the market landscape over the next three to five years. It’s a thoughtful blend of academic research, advanced internal technology, and developments from around the technology ecosystem.

Each engagement outlines problem statements and the many lessons learned from previous projects, and uses a number of internal and external resources from around the world to collaborate and ideate. Within a few short weeks, we develop and test prototypes and proofs-of-concept iterated in a real-world environment. This gives us the opportunity to learn critical lessons where we need to innovate around roadblocks, with a goal of designing a solution that’s incubated and integrated within 12-18 months, and primed to solve the challenges that lie ahead.

For example, we’ve worked with service providers to advance cloud-based storage container innovation designed specifically for IoT and mobile application strategies, laying the groundwork for an IT infrastructure that can evolve quickly to handle the volume of data that was then anticipated from 5G deployments and edge devices – happening today.

The scope of innovation projects underway today continues to focus on how we drive more value out of the exponential data resulting from more connected devices, systems, and services at the edge. IDC forecasts that by 2025, the global datasphere will grow to 175 zettabytes – 175×10²¹ bytes, or 175 billion 1TB drives. Dell Technologies Vice Chairman Jeff Clarke recently put that into context during the keynote at Dell Technologies World – that’s more than 13 Empire State Buildings loaded with data top to bottom! Much of that will happen at the edge. The edge computing market is expected to grow 30% by 2022.

All of that data has the potential to drive better outcomes, processes and of course, new technology that could be the next major industry disruption and breakthrough. But the key word is potential – these are challenges that require innovation to not simply find a solution, but ensure that solution can be deployed and commercialized. Through the open innovation approach, we’re collaborating with customers and partners to meet the new demands of the “Data Era,” and ensuring that ALL the data, wherever it lives, is being preserved, mobilized, analyzed and activated to ultimately, deliver intelligent insights.

Open innovation enables us to be pioneers in software-defined solutions and systems that can scale to manage the influx of data and ensure they evolve with new software and application updates – and unlock our customers’ data capital.

For instance, we’re working with the world’s largest auto manufacturers to build their edge infrastructures and data management capabilities to support huge fleets of autonomous cars! Through innovation sprints and collaboration, we’ve been able to understand what’s required for data to work in real time at the vehicle level, driving intelligence and automation through AI/ML, while also ensuring data management in the cloud and data center is equipped to handle zettabytes of data. It’s our view that the infrastructure powering the future of smart mobility will be the first private zettascale systems in the world, and Dell is part of the core journey to make that a reality.

We’ve partnered with customers in retail to develop intelligent software-defined storage solutions that support integrated artificial intelligence (AI) and machine learning (ML). This automates software updates, which can often zap productivity from IT teams. Using software-based storage offerings provisioned through automation, IT teams can now develop data-driven business applications that deliver better customer experiences.

We’re also continuing our work with service providers and enterprises to build the edge infrastructure required for 5G. For example, we’re working with Orange on specific solutions that look at how AI/ML can manage edge environments. At the same time, we’re helping service providers evolve their multi-cloud strategy so they can seamlessly manage and operate a variety of clouds that exist in public cloud domains, on-premises for faster access and stronger security, and clouds at the edge that enable them to manage data in the moment.

In my experience, innovation with “open” collaborative frameworks and processes delivers pragmatic yet meaningful innovation, fast, across any industry. You can’t advance human progress through technology if it can’t get into the market to deliver real, leading-edge solutions to problems not previously solved. The single biggest challenge in front of our customers is the risk of being disrupted by a digital version of their business that can better exploit technology innovation. And that’s why our aim is to partner with our customers to innovate at speed through open innovation – ensuring our customers can be the disrupters, not the disrupted.

Saturday 17 August 2019

Powering the Human-Machine Workforce in the Data Era

175 zettabytes – 175×10²¹ bytes, or 175 billion 1TB drives… or, put another way, more than 13 Empire State Buildings. For the non-engineers out there, that is the amount of data IDC forecasts will exist by 2025.

It’s becoming widely understood that data is an organization’s greatest resource – its data capital. And it’s that data that, when applied to analytics, artificial intelligence and machine learning, can lead to new applications, solutions and services that drive a better business and a better experience for your customers.

Making modernization a reality


It’s also becoming widely understood that organizations need to modernize their IT strategy. I recently spoke at Dell Technologies World about the five imperatives for making modernization a reality in the Data Era. The first four are largely grounded in IT infrastructure:

1. An infrastructure that can take on structured and unstructured data workloads for AI and Machine Learning

2. A hybrid cloud strategy that includes both public AND private clouds

3. An edge strategy that ensures you can support the data being generated across devices, apps and systems with the right compute, real-time analytics, storage, data protection and security; and,

4. A software-defined data center that ensures all those racks can move and manage data in an intelligent and automated way – and rapidly evolve and scale as data management needs change – which, in this day and age, can be in the blink of an eye.

The fifth is not only an imperative for modernization – it’s critical to how your business will pay off all those insights and deliver on new innovation:

Humanity at the center of transformation


There are now FIVE generations spanning your workforce – the latest entrants being Gen Z. They work differently from Millennials, Gen X and Gen Y – and certainly from Baby Boomers, like me.

However, they are all deeply connected by the need for innovation and technology that gives them the power to be productive, creative and connected – anywhere in the world, any time of day.

Dell Technologies recently collaborated with the Institute for the Future and more than 50 global futurologists, as well as 4,600 business leaders around the world, to better understand and forecast the technology shifts and trends coming in the next decade. Among those trends is understanding the evolving dynamics of the global workforce – and the innovation we need to develop today along the way to 2030.

For starters, people want amazing technology at their fingertips to simply get work done. They need systems that become intelligent, personalized and POWERFUL. And, they need to have a killer design – our devices are an extension of us – we want them to look as polished and intelligent as they are on the inside.


That technology also needs to enable collaboration with colleagues and creativity in new and compelling ways. That’s where augmented reality and virtual reality come in – giving way to new ways for all generations to learn new skills out in the field, create and design in simulated environments, and collaborate with colleagues thousands of miles away – yet interact as though they’re in the same space.

The work that Glen Robson and our innovation teams have underway in our Client Solutions Group continues to push the boundaries of PC, gaming and design innovation for today and the next decade.

The human-machine partnership at work


Further, people and machines must be able to effectively collaborate and work together. AI and machine learning can drive insights and automation that lighten rote tasks for humans, and free up their time to accelerate the development of new services, technologies and innovation born from the influx of data at hand. That’s an important thread to tie back into my first four imperatives discussed above – without an IT infrastructure that can transform into an intelligent business partner, it’s increasingly difficult for people to apply their talents and resources to better outcomes. That’s why driving innovation forward in cloud, edge, and data center solutions is all John Roese and team think about in our ISG business.

But like machines, people will also require AI fluency – one of the three major shifts expected to come as we head into 2030, according to our research. AI fluency means that, over time, we need to expand the breadth of knowledge of AI technologies and capabilities – from elements as basic as understanding software code to broader technical and analytical skills. That training will happen in the workplace and will also require a fundamental shift in how we prepare our students for the workforce.


But there’s also a more “human” element to working and collaborating with AI that will need to evolve – understanding just how to work and collaborate with AI in a social and emotional way. We’ll need to know what AI is capable of, where its strengths end, and where they should be tempered to ensure human intuition and experience aren’t lost. AI will certainly make incredibly fast decisions and assessments, but not all decisions will be black and white, or fact-based. There’s compassion, judgment – the all-important “gut” instinct that many of us have – and my gut tells me that’s always going to have a place.

So what’s next?


As we head into the next decade, there’s a lot of anticipation and of course trepidation from some on how AI and humans will co-exist. I’m nothing but optimistic. I believe technology is a force for good and will continue to drive human progress. The innovation underway across Dell Technologies will continue to transform the power of data into intelligent solutions and applications that give mankind the ability to be smarter, more strategic in our work – and deliver on the promise that “data capital” holds.

Hitting the Accelerator with GPUs

As organizations work to meet the performance demands of new data-intensive workloads, accelerated computing is gaining momentum in mainstream data centers.

As data center operators struggle to stay ahead of a growing deluge of data while supporting new data-intensive applications, there’s a growing demand for systems that incorporate graphics processing units (GPUs). These accelerators, which complement the capabilities of CPUs at the heart of the system, use parallel processing to churn through large volumes of data at blazingly fast speeds.

For years, organizations have used accelerators to rev up graphically intensive applications, such as those for visualization, modeling and simulation. But today, the use cases for accelerators are growing far beyond the typical acceleration targets to more mainstream applications. In particular, accelerators can now be one of the keys to speeding up artificial intelligence, machine learning and deep learning applications, including both training and inferencing workloads, along with applications for predictive analytics, real-time analysis of data streaming in from the Internet of Things (IoT), and more. In fact, NVIDIA® GPUs can accelerate ~600 applications.

So what’s the big advantage of an accelerator? Here’s a quick look at how GPUs rev up performance by working with the CPU to take a divide-and-conquer approach to get results faster: A GPU typically has thousands of cores designed for efficient execution of mathematical functions. Portions of a workload are offloaded from the CPU to the GPU, while the remainder of the code runs on the CPU, improving overall application performance.
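As a rough illustration of that offload pattern, the Python sketch below runs the same matrix multiplication on the CPU with NumPy and on a GPU with CuPy, a NumPy-compatible GPU array library. It assumes a CUDA-capable GPU and the `cupy` package are available; the exact speedup will depend entirely on your hardware.

```python
import time
import numpy as np
import cupy as cp  # NumPy-compatible arrays executed on the GPU

N = 4000

# CPU path: the whole computation runs on the CPU cores.
a_cpu = np.random.rand(N, N).astype(np.float32)
b_cpu = np.random.rand(N, N).astype(np.float32)
t0 = time.perf_counter()
c_cpu = a_cpu @ b_cpu
cpu_seconds = time.perf_counter() - t0

# GPU path: the matrix multiply is offloaded to thousands of GPU cores,
# while the surrounding Python code still runs on the CPU.
a_gpu = cp.asarray(a_cpu)
b_gpu = cp.asarray(b_cpu)
t0 = time.perf_counter()
c_gpu = a_gpu @ b_gpu
cp.cuda.Stream.null.synchronize()  # wait for the GPU to finish before stopping the timer
gpu_seconds = time.perf_counter() - t0

print(f"CPU: {cpu_seconds:.2f}s, GPU: {gpu_seconds:.2f}s")
print("Results match:", np.allclose(c_cpu, cp.asnumpy(c_gpu), atol=1e-3))
```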


Accelerated servers from Dell EMC


If you’re just getting started with AI, machine or deep learning, or if you just want a screaming fast 2-CPU Tower server, check out the PowerEdge T640. You can put up to 4 GPUs inside this powerhouse, and it fits right under your desk! It has plenty of internal storage capacity with up to 32x 2.5” hard drives, and you can connect all your tech with those 8 PCIe slots. (I’m thinking that something like this could improve team online gaming performance.)

Beyond my personal wish-list, the PowerEdge T640 is rackable, and you can virtualize and share those GPUs with VMware vSphere, or with Bitfusion software to provide boosted virtual desktop infrastructure (VDI), artificial intelligence and/or other testing and development workspaces.

For those serious about databases and data analytics, check out the PowerEdge R940xa. The PowerEdge R940xa server combines 4 CPUs with 4 GPUs in a 1:1 ratio to drive database acceleration. With up to 6TB of memory, this server is a beast that can grow with your data.

If you’re not sure what to pick, you can’t go wrong with the popular PowerEdge R740 (Intel) or PowerEdge R7425 (AMD) with 2 CPUs and up to 3 heavy-weight NVIDIA® V100 GPUs or 6 lightweight NVIDIA® T4 GPUs. In the PowerEdge R7425, you can make great use of all those x16 lanes. Why look at lanes? The vast majority of GPUs plug into a PCIe slot inside a server. PCIe data is carried over the lanes in packets. More lanes means more data packets can travel at the same time, and the result is faster data movement between the CPU and the GPU.
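For a back-of-the-envelope feel for why lanes matter, the snippet below works out the approximate one-direction bandwidth of a PCIe 3.0 slot from the per-lane rate (8 GT/s with 128b/130b encoding, roughly 0.985 GB/s of usable bandwidth per lane).

```python
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding ~= 0.985 GB/s usable per lane (one direction).
GBPS_PER_LANE_GEN3 = 8 * (128 / 130) / 8   # ~0.985 GB/s

for lanes in (4, 8, 16):
    bandwidth = lanes * GBPS_PER_LANE_GEN3
    print(f"PCIe 3.0 x{lanes}: ~{bandwidth:.1f} GB/s per direction")

# Prints approximately:
#   PCIe 3.0 x4:  ~3.9 GB/s per direction
#   PCIe 3.0 x8:  ~7.9 GB/s per direction
#   PCIe 3.0 x16: ~15.8 GB/s per direction
```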


Dive into the good-better-best options by workload in this new Dell EMC and NVIDIA ebook.


In addition to PowerEdge servers, Dell EMC has a growing portfolio of accelerated solutions. At a glance, Dell EMC Ready Solutions for HPC and AI make it faster and simpler for your organization to adopt high-performance computing. They offer a choice of flexible and scalable high-performance computing solutions, with servers, networking, storage, solutions and services optimized together for use cases in a variety of industries.

The big point here is that your organization now has ready access to the accelerator-optimized and pre-integrated solutions you need to power your most data-intensive and performance-hungry workloads. So let’s get started!

Friday 16 August 2019

The Power to Probe the Secrets of the Universe

Understanding our universe


Particle physicists study subatomic particles such as quarks, electrons and neutrinos that serve as the building blocks of our universe. Understanding the structure and interactions among elementary particles and fields — at the smallest distance scale and highest attainable energy — can help unlock the secrets of the universe.


For many years, the Dutch National Institute for Subatomic Physics (Nikhef) has conducted research into the universe’s elemental building blocks, their mutual forces and the structure of space and time. Nikhef’s work involving the Large Hadron Collider at the CERN accelerator complex in Geneva, Switzerland, helped confirm the existence of the Higgs boson, one of the particles thought to play a key role in all physical forces.

Not surprisingly, exceptional computing power and data analysis capacity and speed are essential for Nikhef.

“Every year, we receive dozens of petabytes of raw data from various scientific research institutes,” explains Tristan Suerink, IT architect at Nikhef. “It’s only through many calculations that we can understand the data. All of these must be processed and shared with thousands of researchers around the world.”

Never-ending need for power


None of the interactions Nikhef studies are visible to the naked eye. The work is almost incomprehensible to anyone outside of physics. To accomplish its mission, Nikhef’s scientists constantly need more high-performance computing (HPC) power to process the exponential increases in raw data.

Today, the organization processes 10 petabytes of raw data annually. By 2020, that will double — and it will multiply 5x by 2025.

“We are always looking for as much computing power as possible for our budget. Memory, network, calculation capacity and storage are the most important for us.”

— Tristan Suerink, IT Architect, Nikhef

To meet its requirements, Nikhef recently expanded its IT infrastructure with 93 Dell EMC PowerEdge R6415 servers, featuring AMD EPYC™ 7551P processors.

“The design of the Dell EMC PowerEdge R6415 met both of our selection criteria: CPU performance and the high I/O bandwidth available,” notes Suerink. “Moreover, we want CPUs and servers that guarantee four years of solid performance with full deployment.”

Greater processing efficiency


Because Nikhef’s work is so critical, the organization extensively tested demo units of the PowerEdge servers before putting them into production. A single server running on the EPYC architecture and 32 cores offered the best price-performance ratio — essential for an organization that’s trying to maximize the value of every euro spent.

By choosing a single-socket design, Nikhef ensured that the servers’ entire processing capacity could be tapped for data analysis. This makes the solution 20 percent more efficient than a comparable dual-socket system.

“More effective computing power with Dell EMC means that scientists can analyze data more quickly and ultimately do more research.”

— Tristan Suerink, IT Architect, Nikhef

Increased returns fund more research


Particle physics is reshaping the way we view our universe. In addition, it’s had a far-reaching impact on other fields ranging from medicine and computing to sophisticated security applications such as advanced scanning techniques and nuclear inspection and oversight.

As a research organization, funding for Nikhef is constantly a challenge. With Dell EMC PowerEdge servers, Nikhef is maximizing its return on investment.

“Our Dell EMC servers are reliable, so users can work longer and do more calculations. It’s the reliability, uptime and quality of delivery that are important to us.”

— Tristan Suerink, IT Architect, Nikhef

Wednesday 14 August 2019

Dell EMC VxRail Just Derailed the Competition


No matter when you hopped on the hyperconverged infrastructure (HCI) train, one thing is certain – it hasn’t slowed down. The rate and pace of growth in this space is picking up steam as organizations continue to look for ways to modernize their data center, cut infrastructure expenses and drive innovation.

Mind the gap


The gap between Dell EMC’s HCI systems portfolio and the rest of the industry is widening. IDC recently released its Q1 2019 Worldwide Converged Systems Tracker, and once again, Dell EMC is No. 1 in HCI system sales with 32% share. We’ve held this position for the last eight consecutive quarters. That’s right—two years! According to IDC’s research, we’ve experienced 64% revenue growth year over year, outpacing the rest of the industry. What’s more, VMware is doing the same as No. 1 in the HCI software category with 41% share.

The common denominator? Dell EMC VxRail. And VxRail is flying by our competitors like a runaway train. In this same IDC tracker, Dell EMC VxRail has taken over the No. 1 spot in hyperconverged systems!

We could not have gotten here without our thousands of customers, who trust us to help solve their business challenges. We hear from our customers every day how VxRail plays an instrumental role in making your business work better. Here are a few things you told us you like most about VxRail:

◈ Configurability – VxRail is the foundation for data center modernization plus a whole lot more. You can do just about anything with VxRail wherever you are in your IT journey.

◈ Integrated lifecycle management – VMware Cloud Foundation on VxRail makes operating the data center fundamentally simpler, bringing the ease and automation of the public cloud in-house by deploying a standardized, validated and flexible network architecture with built-in lifecycle automation for the entire cloud infrastructure stack, including hardware.

◈ Serviceability – With VxRail, you get secure remote support, proactive monitoring, parts replacement and built-in knowledge articles.

◈ Support – You have a single point of support for both the hardware and software with a 95% CSAT score.


On behalf of our entire VxRail family, thank you!

We are beyond excited about this latest news. Since the inception of VxRail just a few short years ago, we have prided ourselves on our ability to help our customers innovate their business faster and easier, across core data centers, at the edge and in public clouds.

Our fuel is innovation – taking you to your destination of choice


While some others have started to decelerate, we’re still shoveling more coal on the fire. VxRail, with an annual run rate of over $1B, is still the fastest-growing HCI system among the top three product brands. And we don’t plan on putting on the brakes anytime soon as we continue to advance our HCI systems portfolio with enhancements like machine learning-based monitoring and analysis with the VxRail Analytical Consulting Engine (ACE), which we announced at Dell Technologies World a couple of months ago.

Plus, we are further increasing our leadership position by continuing the innovation behind VxRail, jointly with VMware, to be better together. In core data centers, the majority of our VxRail customers run enterprise applications powered by vSAN. At the edge, we offer a wide range of solutions depending on what the edge means to you – from easily managed end-to-end VDI solutions using VxRail and VMware Horizon 7 to VMware Cloud on Dell EMC, which offers data center-as-a-service at the edge or in the core data center. And for connecting with public clouds, VxRail is the ideal hybrid cloud infrastructure foundation for the Dell Technologies Cloud Platform with VMware Cloud Foundation on VxRail. This is a game changer! VxRail is the only jointly engineered HCI system with VMware Cloud Foundation offering full-stack integration and automated SDDC infrastructure deployment. You have a one-stop shop with Dell Technologies.

Hop on board


If you have been thinking about hyperconverged but haven’t yet punched your ticket…there is no better time than now to grab a seat! The No. 1 Dell EMC VxRail is built on principles that deliver turnkey simplicity, automated end-to-end lifecycle management, and a highly differentiated experience for the fastest and simplest path to IT outcomes.

So, whether you’re looking to modernize your data center, deploy a hybrid cloud environment, innovate at the edge or accelerate your application transformation, hop on board, get comfortable and enjoy your transformation ride with VxRail.

Tuesday 13 August 2019

Making the Most of the Multi-Cloud Advantage


Driven by innovation, the ‘digital age’ is by its very nature in constant flux. New technologies are emerging and evolving all the time, and are being combined to unleash the potential for even more intelligent and more efficient ways of working. To adapt optimally to these changes, today’s decision-makers must stay open-minded about altering their business strategies to keep pace with the latest methods for smarter working, better ROI, an improved customer experience and greater employee efficiency. This need for adaptation is illustrated particularly starkly by the advances organizations can make by reconsidering their cloud strategy.

Connecting the front and back end through a multi-cloud vision


Times are changing, and so too is technology. The adoption of redefined cloud strategies can have an enormous impact on creating a real-time, two-way dialog between the customer and the company, while also driving intelligent decision-making based on up-to-the-minute data insights to boost the accomplishment of both internal and external goals. IT has transformed itself from a simple service provider into a trusted business partner, opening new doors in the process. It has created a solid foundation, enabling the benefits of these new technologies to be combined in order to establish new ways of connecting people – not only customers with companies, but also colleagues with each other – breaking down departmental silos to generate a single corporate vision based upon a common goal. In today’s market, a multi-cloud strategy is an essential tool for connecting these dots.

Let’s pause to reflect on the adoption of the cloud over time. At first it was an attractive alternative, with cloud-compute resources freely available on demand, metered and easily charged back to the departments using them. Therefore, many departments embraced it, both inside and outside IT. All of this led to what we called a ‘cloud-first’ approach. But since then, the IT landscape has changed; multiple workloads are now on multiple clouds, privacy regulations have tightened due to Europe’s GDPR, and new and innovative technologies enable us to exploit even more efficiencies – but only if the C-suite can agree on a strategy.

Adopting a cloud strategy that combines hybrid and multi-cloud potential has the power to offer organizations multiple advantages, such as designating the optimum cloud solution for each workload, ensuring data security and making regulatory compliance easier. Furthermore, distributing data across both hybrid and multi-cloud setups increases the ability of organizations to share, back up and protect their information across multiple sites.

The modern CIO must have the vision to recognize that a solid, well-designed multi-cloud infrastructure provides the necessary flexibility to drive change, reach customers in new and unexpected ways, and help employees and departments to share information seamlessly – from edge to core to cloud. In other words, the potential of IT is no longer hidden away at the back end. It is now up front and central to a smart business model within any company looking to enhance the customer experience and improve collaboration – and that is why IT should be high up on every C-suite member’s agenda. Today’s businesses are recognizing the advantages to be gained from shifting their data to where it does the most good, and a solid multi-cloud strategy is the catalyst for this change.

Changing times, changing strategies


To get a clearer picture of the current needs of today’s companies, we commissioned ESG to conduct a Cloud Economics Research study to see where the market is heading. The research – among over 200 organizations throughout the UK, Germany and France – was aimed at determining the extent to which public-cloud cost benefits are being realized and whether organizations are considering the repatriation of workloads.

As expected, we found that cloud strategies have evolved in line with the growing demands that emerging technologies place on IT structures. While there was considerable interest in the public cloud in 2014, that trend is now leveling off and shifting back towards a more hybrid approach. Every customer is now in multi-cloud mode. We have moved past the cloud-first wave, with repatriation happening at a faster rate in EMEA than in North America due to the influence of GDPR and the increased emphasis placed on data sovereignty. The ESG survey findings include:

◈ The majority (54%) of public cloud users say some (51% on average) cloud-resident workloads cost more than they did to run on-premises

◈ The majority (58%) of public cloud users say some (54% on average) cloud-resident workloads cost more than expected

◈ The majority (56%) of public cloud users expect to repatriate cloud-resident workloads (typically 25%-50%) in the next 12 months

◈ 50% of respondents have a significant hybrid cloud management problem, outnumbering those without challenges by more than 6:1

◈ Environments becoming more complex outnumber those becoming simpler by 7:1

Customizing cloud models for maximum impact


Just as every organization must have its own specific business model, each company’s cloud strategy must match the needs of its specific workload. History has taught us that, for maximum results, the C-suite members need to think of cloud strategies as an ever-evolving model rather than a simple one-off solution. Today’s most successful organizations are on a multi-cloud journey which comprises both a hybrid cloud path and a native cloud path existing simultaneously, each offering its own set of benefits and rewards. An on-premises solution is considered to have the upper hand in risk and monitoring, while off-premises solutions boast advantages in terms of manageability, ease of procurement and cost. For this reason, every company must decide where its priorities lie and design a cloud strategy to best suit its needs for connecting a multi-cloud approach to the front end.

Taking control of your multi-cloud environment opens up vast possibilities for improving your business model, including:

◈ Lowering TCO with cloud economics

◈ Increasing business agility

◈ Accelerating time to market

◈ Reducing business risk

The adoption of redefined cloud strategies can have an enormous impact on enhancing the customer experience and driving intelligent decision-making based on real-time data insights to boost the accomplishment of both internal and external goals. Today’s decision-makers need to keep in mind that a cloud strategy is about more than simply where to place applications and servers; cloud solutions connect organizations with their partners, their customers and their own employees to allow tomorrow’s innovations to thrive.

Saturday 10 August 2019

Dell EMC Unity XT Rises to the Top in Independent Storage Performance and Efficiency Testing

The success or failure of many storage solutions is often determined by an array’s performance across a variety of workloads. A modern storage array must concurrently deliver performance, data reduction and data services, enabling organizations to deliver increased efficiencies and thrive in today’s complex application environments. The Dell EMC Unity product line has already delivered on these capabilities by increasing data reduction rates by ~80% since initial release (December 2016). And now with the recently refreshed Unity XT, we’re raising the bar yet again.

In order to quantify all of the enhancements we’ve made with Unity XT, we asked Principled Technologies (PT) to independently conduct hands-on performance testing, with and without data reduction, between Unity XT and its primary competitor, which we’ll call Vendor A. This testing confirms that Unity XT, with its modern architecture and hardware enhancements, beats a leading storage competitor in three different performance and data reduction scenarios.


Scenario 1: Performance with Data Reduction Turned On


In this test, data reduction (compression and deduplication) was activated on both systems to maximize storage efficiency and space. This is especially significant when working with virtual servers, file system data, archival and backup data or email systems containing multiple instances of the same file attachment. While Unity XT data reduction is always inline to support data-intensive workloads, at a certain threshold Vendor A disables inline deduplication. With data reduction turned on, Unity XT was 24% faster at 8k IO block size/100% Reads and 67% faster at 32k IO block size/100% Reads than Vendor A. If you’re looking for an analysis of workload performance based on a more common scenario, Unity XT also beat out Vendor A by 20% in a 70:30 R/W mix.
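
To illustrate how a “percent faster” figure like those above is derived, here is a minimal sketch; the IOPS values used are hypothetical placeholders, not numbers from the Principled Technologies report.

```python
# Hypothetical illustration: deriving a "percent faster" figure from two
# throughput measurements. These IOPS values are made up for the example
# and are not taken from the Principled Technologies report.
def percent_faster(iops_a: float, iops_b: float) -> float:
    """How much faster system A is than system B, as a percentage."""
    return (iops_a / iops_b - 1) * 100

unity_xt_iops = 124_000   # hypothetical Unity XT result
vendor_a_iops = 100_000   # hypothetical Vendor A result
print(f"Unity XT is {percent_faster(unity_xt_iops, vendor_a_iops):.0f}% faster")  # -> 24% faster
```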


Scenario 2: Performance with Data Reduction Turned Off


In this test (8k IO block size and 100% Reads), with data reduction turned OFF on both systems, Unity XT’s raw performance for data-intensive workloads was 93% better than Vendor A’s. Unity XT also proved to be 47% faster at 32k block size and 100% Reads. While most of our customers would enable data reduction, there are primary datasets – such as music/audio, photo and video files, along with certain types of Big Data like telemetry and genomic files – that are not necessarily impacted by array-based data reduction because they tend to be compressed by default in software. This test therefore illustrates each system’s raw ability to handle these data types.

Scenario 3: Data Reduction Under the Same Performance Load/Data Set


In this test, both arrays were placed under the same load at 70K IOPS, 8k IO block size and 100% Writes. The goal was to isolate data reduction efficiency and keep all other variables equal. Unity XT 880F’s data reduction rate came in at 7:1, or 129% better than Vendor A’s 3.05:1 rate. During the 3-hour pre-fill process, Vendor A’s system appeared to halt its inline data reduction process to preserve IOPS.
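
As a quick sanity check on the arithmetic, the ratio between the two quoted data reduction rates works out to roughly the 129% figure above:

```python
# Quick check of the comparison above: a 7:1 data reduction rate versus
# a 3.05:1 rate is roughly a 129% advantage.
unity_xt_ratio = 7.0
vendor_a_ratio = 3.05
advantage = (unity_xt_ratio / vendor_a_ratio - 1) * 100
print(f"{advantage:.1f}% better data reduction")  # -> 129.5% better
```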


No compromise midrange storage


Unity XT’s powerful capabilities allow it to run virtualized applications, support inline data reduction and deliver unified data services – simultaneously. Our customers have long told us they value Unity for many reasons, including:

◈ Storage lifecycle simplicity from ordering to support

◈ Unified architecture (file and block)

◈ Dual-active controllers

◈ Investment protection with the Future Proof Loyalty Program

Now that the results are in, you can add these significant performance and data reduction advantages to the long list of reasons why customers choose the Dell EMC Unity family. With the storage system from Vendor A, customers will have to choose between performance OR efficiency. With Unity XT systems, our customers get both – without compromise.

Thursday 8 August 2019

Dell EMC DLm 5.1 Moves Tape and Disk Data Synchronously Into the Future


Dell EMC has been a leader in mainframe storage for nearly 30 years, starting with the introduction of the first Symmetrix disk array. Our legacy of innovation and market leadership continues today at SHARE 2019 (booth #201) as we announce DLm release 5.1, bringing synchronous tape replication between data centers and Universal Data Consistency™ between tape and disk using our industry-leading, award-winning all-flash PowerMax 8000.

Additionally, we’re enhancing long-term retention of tape using cloud-based physical tape replacement powered by Dell EMC Cloud Object Storage (ECS). This ensures that your physical (or virtual) tape replacement requirements are exceeded, whether you need to stretch your tape to cover newer requirements, commit data to the cloud or simply, at last, say goodbye to cumbersome, complex physical tape.

Simplifying Cloud Data Migration and Restoration


Dell EMC recognizes that your mainframe storage strategy must include leveraging your organization’s cloud infrastructure to reduce costs. Ideally, this is done using capacity within an existing private cloud infrastructure and offloading physical tape or virtual tape data classified as requiring long-term retention. In the years since DLm first wrote to the cloud, we have continued to simplify and expand users’ options for writing data to, and restoring data from, private clouds. Today, DLm 5.1 makes cloud storage with Dell EMC ECS even easier by adding command options that enable moving data to the cloud automatically or on demand. A new, single restore command can now recall data from ECS.

Solving Data Consistency and Synchronous Tape Problems


Integral to most mission-critical applications, even basic Hierarchical Storage Management (HSM), is the notion that certain data has interdependencies that must be maintained to ensure data integrity. This is especially critical when recovering from a disaster event, or even when recalling a dataset migrated to virtual tape. When interdependent data is not synchronized, the result can be lost data or, in the worst case, an application failure requiring a lengthy recovery process. Consider HSM’s control data sets, the tape catalog and the tape data itself: they are physically separate structures, but if they are not replicated in a consistent manner – as can happen when independent remote replication mechanisms are deployed – they fall out of sync, which can cause HSM dataset recalls to fail and trigger a potentially lengthy data restore.
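
To make the failure mode concrete, here is a toy Python sketch – purely illustrative, not DLm or HSM code, with hypothetical names – showing how a catalog replica and a data replica that are replicated independently can disagree, while replicas taken at the same consistency point cannot.

```python
# Toy illustration (not DLm or HSM code): why replicating a tape catalog and
# tape data through independent mechanisms can break recalls at the DR site,
# while replicating both to the same point in time cannot.

def replicate(primary_catalog, primary_data, catalog_lag, data_lag):
    """Each replica trails the primary by its own number of updates."""
    dr_catalog = dict(list(primary_catalog.items())[:len(primary_catalog) - catalog_lag])
    dr_data = dict(list(primary_data.items())[:len(primary_data) - data_lag])
    return dr_catalog, dr_data

def recall(dr_catalog, dr_data, dataset):
    """A recall succeeds only if the catalog entry AND the tape data exist at DR."""
    if dataset in dr_catalog and dr_catalog[dataset] in dr_data:
        return f"recall of {dataset}: OK"
    return f"recall of {dataset}: FAILED (catalog and data are out of sync)"

# Primary site: three datasets migrated to virtual tape volumes.
catalog = {"DS1": "VOL001", "DS2": "VOL002", "DS3": "VOL003"}
data = {"VOL001": "payload1", "VOL002": "payload2", "VOL003": "payload3"}

# Independent replication: the catalog replica is current but the data replica lags.
dr_cat, dr_dat = replicate(catalog, data, catalog_lag=0, data_lag=1)
print(recall(dr_cat, dr_dat, "DS3"))   # FAILED: the catalog points at data that never arrived

# Consistent replication: both structures reflect the same point in time.
dr_cat, dr_dat = replicate(catalog, data, catalog_lag=1, data_lag=1)
print(recall(dr_cat, dr_dat, "DS2"))   # OK: catalog and data agree
```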

Dell EMC recognized this problem years ago and created an ecosystem called Universal Data Consistency™ to ensure precise consistency between metadata and datasets written and replicated to multiple storage locations. With the introduction of the PowerMax 8000 attachment to DLm, customers can leverage Universal Data Consistency using all-flash storage. In a recovery situation, time is always critical, as is data consistency. PowerMax’s end-to-end NVMe architecture combined with Universal Data Consistency can provide a significant performance assist when the data is not cached, as is the case in a recovery situation.


Another often overlooked consideration in using tape for data protection is the currency of the tape datasets between production and DR sites. When data is 100% in sync between primary and DR, your Recovery Point Objective (RPO) is zero; that is, you can recover the exact point in time of your data (transactions) from your DR site. DLm 5.1 leverages PowerMax’s Symmetrix Remote Data Facility in synchronous mode (SRDF/S) for the ultimate in tape data synchronization. With PowerMax 8000, DLm can assure host applications that tape data is truly in sync with the disk data, since the primary (production DLm) PowerMax array waits until the data is committed by the secondary-site PowerMax before the next write is accepted from the host, ensuring that the replicated copy of the data is always as current as the primary.
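
To make the acknowledgement ordering concrete, here is a minimal, purely illustrative Python sketch of synchronous versus asynchronous write semantics; it is not SRDF code, and the names are hypothetical.

```python
# Purely illustrative sketch (not SRDF code) of the acknowledgement ordering
# described above: a synchronous write is acknowledged to the host only after
# the remote copy has committed, so the DR copy can never trail the primary
# (RPO = 0); an asynchronous write is acknowledged first, so the remote copy
# may lag and a failure in that window loses data (RPO > 0).

primary, remote = [], []

def write_sync(block):
    primary.append(block)
    remote.append(block)       # commit at the secondary site first...
    return "ack"               # ...and only then acknowledge the host

def write_async(block):
    primary.append(block)
    ack = "ack"                # host is acknowledged immediately...
    remote.append(block)       # ...replication happens later; a failure before
    return ack                 #    this line would leave the DR copy behind

for i in range(3):
    write_sync(f"tape-block-{i}")

print("primary and DR copies identical:", primary == remote)   # True -> RPO = 0
```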


Are You Constantly Trying to Stretch Your Tape Investment?


Since its introduction more than a decade ago, DLm has been designed for a multiplicity of environments, helping you leverage your investment in more than just tape replacement. Unlike other virtual tape systems, DLm’s multi-tenancy and shared storage capability enable your organization to use virtual tape to satisfy multiple departments’ simultaneous demands for their own “unique” tape, each with its own set of specifications such as Recovery Point Objective/Recovery Time Objective or tape addressing. Imagine installing one DLm8500 system and having several business units think that they have their own unique, dedicated system built just for their needs! Need to put a DLm into your organization’s rack or repurpose an existing rack? Release 5.1 makes that possible as well.

Are the days of the mainframe silo, where every piece of storage equipment was dedicated to mainframe applications, long gone? Many organizations today are requiring (or at least asking) that storage capacity for mainframe data be “shared” with distributed systems capacity. Dell EMC recognized this trend nearly a decade ago, making it simple and easy to leverage Dell EMC PowerMax 8000, Data Domain or ECS cloud storage across both mainframe and distributed environments to better maximize your storage investment.

If you’re at SHARE this week, please visit booth #201 to talk with our experts about our comprehensive set of mainframe storage solutions (DLm virtual tape, PowerMax primary storage, Connectrix FICON directors and automated failover of disk and tape using Dell EMC GDDR).