Sunday, 29 August 2021

Experience the Power of APEX Data Storage Services

APEX Data Storage Services is proven to drive immense value for our customers and Dell Technologies is now making it even easier for you to get started.

APEX Data Storage Services Online Pricing Calculator

A key tenet of APEX Data Storage Services is transparent pricing. As procurement evolves and buying cycles compress, it’s imperative that you can quickly get an accurate price estimate for your purchases. To help you do this, Dell Technologies recently launched the APEX Data Storage Services Pricing Calculator.

With the simple-to-use calculator, just select the four key service parameters (data service, performance tier, base capacity and subscription length) and immediately see the estimated base cost of your service. Then easily compare prices as you adjust the parameters of the offer.
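To see how such an estimate behaves as you vary the inputs, here is a minimal sketch in Python. Everything in it (the service names, per-terabyte rates and term discounts) is invented for illustration and does not reflect Dell's actual APEX pricing.

```python
# Hypothetical sketch of a base-cost estimate from the four service
# parameters. Tiers, rates and discounts below are invented for
# illustration only; they are not Dell's actual APEX pricing.

RATE_PER_TB_MONTH = {            # invented $/TB/month by service and tier
    ("block", "balanced"): 18.0,
    ("block", "optimized"): 30.0,
    ("file", "balanced"): 15.0,
    ("file", "optimized"): 25.0,
}

TERM_DISCOUNT = {12: 1.0, 36: 0.85}   # invented multiplier by term (months)

def estimate_monthly_cost(data_service, performance_tier,
                          base_capacity_tb, term_months):
    """Estimate the monthly base cost from the four service parameters."""
    rate = RATE_PER_TB_MONTH[(data_service, performance_tier)]
    return base_capacity_tb * rate * TERM_DISCOUNT[term_months]

# Adjust any parameter and re-run to compare estimates side by side.
print(estimate_monthly_cost("file", "balanced", 500, 36))
```

Changing one parameter at a time, as the calculator encourages, makes the cost impact of each choice easy to see.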

Promotional Offers

The early feedback around APEX Data Storage Services has been fantastic. To help you with the transition to APEX Data Storage Services, Dell Technologies is excited to introduce a family of APEX promotions available to customers. The first two are below. There will be more to come, so watch out for upcoming announcements.

◉ 90-Day, Money Back Guarantee: You can now experience the simplicity and agility of APEX Data Storage Services knowing that Dell has your back. With this promotion, you can cancel your subscription at any time, for any reason, during the first 90 days of your initial subscription and receive a full refund. Yes, we are that confident that you will love the service.

◉ Migration Credits: Experience the power of APEX Data Storage Services by reducing the costs and complexities of migration. If you sign up for APEX Data Storage Services and concurrently sign up for Data Migration, then you will receive credits to help offset the cost of your data migration.

It’s never been easier to get started with APEX Data Storage Services. To learn more visit the APEX Data Storage Services page.

Source: delltechnologies.com

Saturday, 28 August 2021

Increasing Development Velocity with Self-Service IT

In IT, it’s understood that users, particularly developers, don’t like to wait to get the infrastructure they need. Dell Digital, Dell’s IT organization, has an internal cloud service portfolio to ensure they don’t have to.

We call it our Dell Digital Cloud: a centralized, self-service marketplace for internal business partners, stakeholders and some 9,000 developers to consume our infrastructure services. In a step towards self-service nirvana, it lets developers and infrastructure engineers alike get what they want, when they want it, without IT getting in the way.

Dell Digital Cloud runs on Dell’s hyperconverged infrastructure (HCI), namely VxRail and PowerFlex, at the heart of our modern data center. Users can provision services—virtual machines (VMs), containers, data services, databases and more—in less than an hour compared with waiting weeks via a typical IT provisioning process.

Replacing tickets with on-demand

Traditionally, a user would request an IT service through a queue-based ticketing system like ServiceNow. The request would be vetted and approved, then move through the system in a very waterfall fashion, with queue wait times at each step as people completed their work.

With the cloud portal, Dell Digital no longer needs to curate what users can have or when they can have it. Instead, we curate the services on the back-end and create strong standards that tie the service catalog into our security, compliance, governance, monitoring, reporting, etc. We still have all the rigor around what we’re doing.

The portal is available to everyone at Dell, though its users are primarily developers and the occasional product team. It is used across business groups, including product groups, cybersecurity and team member experience.

Getting IT out of the way

Besides the fact that developers and other IT users have, of course, always wanted access to what they need without waiting, there were several other reasons we decided to create the portal.

One is that building out our private cloud environment over the past three years using Dell and VMware technology has resulted in infrastructure that lowered our cost to serve, reduced provisioning time and increased agility. We’ve automated many processes, providing users on-demand access to platforms, databases, compute, storage and networking via self-service.

Dell Digital saw a need to create a public cloud-like experience for our developers and our internal business partners, and that's what we did. By creating stronger standards in the catalog that people are consuming, we mimic a public-cloud experience with a highly competitive cost structure, which is a common expectation among today's workforce.

If a user still needs services on a public cloud service and they have a viable business reason, we can assist in brokering those services using Dell-approved top-level accounts and security parameters. By offering users a multi-cloud approach, Dell Digital has visibility into what users are consuming. We can also ensure adherence to our security policies around public cloud usage. If user demand for a particular service in the public cloud reaches a pivotal threshold, we often add those services internally, repatriating those workloads to the Dell Digital Cloud to take advantage of our substantial cost and performance benefits.

We provide portal users with showback on the cost of the services they provision, along with the ability to “try before they buy” services for 30 days.

We also have a team dedicated to working with our stakeholders, offering consulting on how to onboard into the portal. If users have a traditional legacy application and want to start consuming some of these services to gain efficiency and higher performance, we help them find the best way to do that. 

An API marketplace

We operate in a product model approach to add and iterate on features, based on feedback from developers and other stakeholders. While the portal is often the focal point of the conversation, all the back-end services that the portal uses are APIs. They are available through the Dell Digital-wide API marketplace, where our public and private APIs are cataloged and managed across Dell Digital.

Many of the workflows are accessed directly via APIs from CI/CD pipelines and other key software initiatives, making the experience much more streamlined for our developers and engineers.

All APIs in the marketplace are vetted via our CI/CD development pipeline, where they are validated and tested before being published.

We are continuing to expand our catalog to gain self-service efficiency across our IT infrastructure, from the basics like VM and container provisioning to day-two operations including how we secure, grow, manage and maintain the services.

As a company that shows our customers how HCI can help them gain flexibility, automation and standardization to improve business velocity, our internal Dell Digital Cloud is proof of practicing what we preach to remove IT as a barrier to services.

Source: delltechnologies.com

Thursday, 26 August 2021

They Laughed When We Said Hybrid

Central Texas is a long way from Wall Street and from Silicon Valley – about 1,700 miles from both. And being headquartered here, sitting smack in the middle, has helped Dell Technologies defy the conventional wisdom of those markets. Why is that a good thing? Because conventional “wisdom” is rarely wise, and convention actually stifles innovation. Challenging it is part of Dell’s culture.

When conventional wisdom isn’t wise

I have had the good fortune of a front row seat to the historical intersection of cloud, conventional wisdom and Dell Technologies. It has given me a unique view of where convention faltered and common predictions missed the mark.

In 2008, Nicholas Carr wrote the influential book “The Big Switch: Rewiring the World, from Edison to Google.” He famously offered an analogy for IT, likening it to the history of electricity and electricity generation. Discussing the significance of the shift to cloud computing, he explained that in the beginning, companies and communities generated their own electricity – by leveraging dynamos, steam power, water wheels, etc. As these technologies improved, centralized generation and broadened distribution led to the creation of electrical utilities.

Most companies used electricity; generating it was another thing. Carr likened that to IT, claiming that IT wasn’t central to a company’s mission and would be cast aside when somebody could consolidate it and do it better. He envisioned maybe six giant companies that would operate IT while everyone else simply consumed it.

Ownership has its privileges

Humor me while I introduce another book that I read from the aforementioned front row. Two years before Carr published his book, Chris Anderson penned the NYT bestseller “The Long Tail: Why the Future of Business is Selling Less of More.” The editor of Wired magazine didn’t explicitly address cloud computing – it was in its infancy then – but he did essentially clap back at Carr’s argument, noting that owning your critical capabilities (e.g., IT) would be imperative as wants and needs diverged. As I predicted, we’ve got some very large players, but we also have thousands of clouds, and not all of them are public.

The thinking in 2008 (along with mini HD camcorders and hippie headbands everywhere) centered on “The Big Switch” being inevitable because public cloud was cheaper than owning and operating your own – by a lot. Unfortunately, that assumption held for a long time, until it was finally acknowledged that public cloud delivers on its cost promise early in a company’s cloud journey, but that as the company scales, the pressure on margins grows and starts to outweigh the benefits (see my blog “The Transformation Knothole”).

While many industry and financial analysts bought the notion of the big switch, happily, Dell was already betting against conventional wisdom and was planning for a very hybrid and multi-cloud world.

Our logic was simple. We agreed that non-differentiating IT functions would be subsumed by public cloud Software as a Service (SaaS). So, we worked with SaaS companies to build their own infrastructure to deliver their services at scale. We agreed that public cloud infrastructure as a service (IaaS)/platform as a service (PaaS) was popular with developers because it was convenient and fast, and removed internal IT from the critical path. We agreed that technology investments would be focused on business differentiating capabilities. That meant that digitally intensive businesses would continue to own and operate technology for multiple reasons, including cost, control and innovation.

Cloud is an operating model, not a destination

So, we set out to enable an ecosystem of cloud operating systems that could be deployed everywhere and operated by anyone. To that end, we signed on as a founding member of OpenStack (the free, open-standard cloud computing platform) and integrated deeply with VMware to simplify operations.

We predicted correctly that public cloud IaaS/PaaS/SaaS was a rising tide that would float all boats. We fought the false notion that public cloud was cheaper, and followed our conviction that cloud was more opportunity than threat. Forecasting we’d need to integrate public cloud services with private cloud, and that the ability to integrate data from across the clouds would be critical, we invested to facilitate that.

In our view, everything pointed to a multi-cloud world solved with hybrid cloud solutions.

Conventional wisdom mocked our conclusion, suggesting that we invented the concept to beat back public cloud and change customer sentiment about it.

Flood alert in Texas: the digital kind

A decade later, conventional wisdom is finally catching up. Even the public cloud players are enabling customers to run their cloud operating systems where they need them; think Microsoft Azure Stack HCI, Google Anthos and Amazon EKS Anywhere – all of which allow end-users to deploy on owned and operated hardware of their choice.

Cloud truly has been a tide to float all boats. In fact, it is more like a digital flood, spreading everywhere, and the challenge now is to enable businesses to extend their cloud operating models.

They laughed when we said it would be a hybrid and multi-cloud world, but we kept a straight face while challenging conventional wisdom to forge our path forward. Maybe it’s our Texas roots that keep us pragmatic, open minded and flexible (and too hot for headbands). With true southern hospitality, we always put the needs of our customers first and check dogma at the door. It has served us and our millions of customers well.

P.S. Back to Nicholas Carr’s Big Switch theory of IT. The major flaw in his theory was not understanding that it was the Internet that was more like electricity, a shared utility that enables innovation. He did not recognize that IT would morph from a cost center, necessary but not central to the business of business, into THE business. He locked into considering IT against a backdrop of a highly analog world – something at odds with the core mission of the business – and didn’t see the digital revolution coming: the revolution that would make IT central to nearly every sector of the global economy.

Source: delltechnologies.com

Thursday, 19 August 2021

End-to-End AI is Within Reach — Are You Ready?

For today’s digitally driven enterprises, artificial intelligence applications are growing in importance. Many forward-looking enterprises are now rolling out or laying the groundwork for AI-driven applications that automate and enhance business processes and services. And the future promises to bring much more of the same.

For IT and business leaders, the rise of AI in the enterprise is much more than an incremental change. It’s a sea change that calls for the development of end-to-end AI strategies and new supporting capabilities in the underlying IT infrastructure. This is an important takeaway point from a new IDC white paper — “End-to-End AI is Within Reach” — that outlines key considerations for enterprises moving to processes and services driven by artificial intelligence.

Here are some of IDC’s observations from this thought-provoking white paper, sponsored by Dell Technologies.

AI-driven applications will span the enterprise

◉ IDC expects that within the next few years, AI will start to permeate business processes for most enterprises. In general, more data will drive better products and services, improved customer experience and more relevant business insights.

◉ Big data analytics applications leveraging artificial intelligence will drive better business insights, fueled by the massive amounts of data that enterprises will collect from their products and services, employees, internal operations and partners.

◉ As business models become much more data-driven, the key challenge for enterprises will be to identify and capture the data they need to improve their offerings and then use that data effectively to drive value for the business and its customers and partners.

Enterprises need an end-to-end AI strategy

◉ To make the most effective use of AI-driven big data analytics, enterprises will need to create an end-to-end AI strategy that is well integrated across three different deployment locations — from edge to core data center to cloud. IDC says that because of the many new requirements of this hybrid, multi-cloud strategy, almost 70 percent of IT organizations will modernize their IT infrastructure over the next two years.

◉ Enterprises successfully deploying AI will have their AI infrastructure distributed across edge, core and cloud deployment locations, each of which will exhibit different workload profiles. Rather than thinking about AI infrastructure as a series of point deployments in different locations, enterprises should strive to craft a well-integrated, end-to-end AI infrastructure strategy that leverages each of these deployment locations effectively.

◉ There will be a proliferation of data capture points as enterprises glean data from edge devices, their own products and services, employees, supply chain partners and customers. Data needs to stream freely to where it naturally settles in a storage environment. After it has been leveraged for insights, compute needs to be brought to the data to perform further analysis.

AI workloads place new demands on IT infrastructure

◉ AI workloads will demand many new capabilities from the underlying IT infrastructure. Getting the underlying infrastructure right is a key determinant of success as enterprises look to AI to help drive better business decisions. Enterprises should consider the infrastructure requirements for AI from three angles — scale, portability and time — as they modernize their IT infrastructure for the data-centric digital era.

◉ Enterprises will build their infrastructure using both general-purpose and accelerated compute, distributed unstructured storage platforms, a mix of different storage technologies, and AI-driven systems management, as well as new AI framework tools like PyTorch and TensorFlow.

◉ IDC has released an “Artificial Intelligence Plane” model to help customers better understand how to create the right ecosystem to maximize the contribution AI-driven workloads deliver. The underlying storage infrastructure is a key component in that model, and it is already clear from end-user experiences over the last several years that legacy architectures will generally not provide the right foundation for long-term AI success.

◉ While each phase of the AI pipeline requires some type of performance-intensive compute, AI model training is especially demanding due to the large amount of parallelism involved. There are various types of compute resources that are suitable for the different AI pipeline stages.

Dell Technologies can help you get there

◉ Dell Technologies markets a range of systems for every AI scenario, allowing businesses to grow their capabilities at their own pace as their needs shift and as their data sets grow. Deployment scenarios with Dell Technologies solutions include data center, edge, cloud and multi-cloud, with the compute brought to the data rather than the other way around.

◉ To help its customers succeed with AI, Dell Technologies has put together Dell EMC Ready Solutions for AI. These engineering-validated stacks make it easy for enterprises to buy, deploy and manage successful AI projects, offering not only the underlying IT infrastructure but also the expertise to create optimized solutions that drive real business value.

◉ With its broad IT infrastructure portfolio, including compute, storage and networking resources, and AI ecosystem partnerships, Dell Technologies can bring the right resources together with an end-to-end AI focus that drives competitive differentiation for its customers.

Source: delltechnologies.com

Tuesday, 17 August 2021

Digital Cities of the Future

While the advent of emerging technologies brings tumultuous change to rural living and work, it also carries a sense of hope – aspirations of better living. These combined forces are fueling migration towards cities. This change is taking place globally, with Asia and Africa seeing the most rapid changes, and the Americas seeing slower adoption of the same trends. Due to the ever-growing economic opportunities cities provide, people continue to leave their rural lives in favor of urban centers.

It is estimated that by 2050, the world’s cities will be home to over 6 billion people and no fewer than 2 billion personal vehicles, mostly cars. Taking North America as a lifestyle benchmark for the world, we are on track to strip the earth of five times its available resources in order to support an extravagant consumption model. As of today, cities account for an alarming 78% of energy consumption and produce 70% of greenhouse gas emissions.

Cities need to go through a paradigm shift to address the latent and emerging needs of their citizens and infrastructure. We believe transformation value is maximized when cities are built on a modern digital core that scales up to a “system of systems” that connects all the key operations to build one integrated picture. With a wealth of experience in IT transformations across verticals, industries and cities, Dell Technologies and its extensive ecosystem of partners are committed to creating architectures that are open, scalable, agile, secure and future-proof. Dell Technologies Digital Cities teams are leading transformative changes in the way cities operate and maximize output.

The Citizen Impact Awards were created to recognize solutions, empower teams across Dell Technologies and show appreciation for outstanding efforts and results. Our objective with this recognition program is to celebrate team and individual achievements that provide value to Digital Cities and citizens, and to encourage innovative work that advances the urgent goals of sustainable living and productivity growth while delivering excellent outcomes for citizens, customers and partners.

Some of the top solutions we celebrated with the Citizen Impact Awards include:

◉ A step toward saving the Great Barrier Reef, Australia. Edge IoT devices were deployed on boats to facilitate the transfer of images and data from the cameras of visitors and divers. This allowed crowdsourcing of the gathered data, helping scientists gain better insight into the damage to the Great Barrier Reef. For the public, these pictures raise awareness about one of the natural wonders of the world and the dangers it faces from rising sea temperatures caused by climate change and pollution.

◉ Helping manage traffic flow in the city of San Jose, California, USA. We partnered with Intellisite and NVIDIA to help the San Jose traffic department study the flow of traffic and understand the anomalies that lead to traffic snarls and accidents. Understanding the causes of these anomalies leads to better management of vehicular traffic, reduced fatalities and improved productivity in an economic epicenter.

◉ A smart stadium for national and international soccer matches, shows and concerts: the Coni – Olimpico Smart Stadium, Italy. The stadium partnered with Dell Technologies to manage crowding and distancing by installing new systems as it prepares to host Serie A matches in September 2021. Dell Technologies installed a smart platform, video surveillance software, storage and servers, turning the stadium into a smart one. This eliminates the threat of unticketed people gaining access to the stadium, increasing safety and security.

Source: delltechnologies.com

Monday, 16 August 2021

Shattering Bottlenecks with GPU Acceleration in Cloudera Data Platform

In today’s enterprises, data flows like water cascading down a mountain river in the spring. Massive amounts of data continually stream into edge, core and cloud systems, creating both challenges and opportunities for IT and business leaders.

On the challenges side, IT administrators need to capture, curate and store data in a mix of structured, semi-structured and unstructured formats and make it all readily available to modern applications, like those for data analytics and machine learning. On the opportunity side, data scientists and business leaders can now innovate with data to gain insights, optimize processes and help the business move faster than the competition.

And this is where the NVIDIA GPU-accelerated Cloudera Data Platform comes into play. This groundbreaking analytics solution, now available on NVIDIA Certified Systems from Dell Technologies, integrates NVIDIA’s RAPIDS Accelerator for Apache Spark 3.0 to accelerate data pipelines and push the performance boundaries of data and machine learning workflows. With this proven combination of leading-edge technologies, organizations have what they need to accelerate the development and delivery of high-quality data and analytics applications that power AI across the enterprise — without changing code or reworking projects.

This is great news for data scientists and others who wrestle with bottlenecks created by massive amounts of data and slow compute. These bottlenecks directly impact the cost and speed at which companies can train and deploy models across the organization. But now, that’s old news. Today, with an NVIDIA GPU-accelerated Cloudera Data Platform, data scientists can execute end-to-end data science and analytics pipelines on NVIDIA Certified Systems to improve machine learning model accuracy by iterating on models faster and deploying them more frequently.
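Because the RAPIDS Accelerator plugs into Spark's standard plugin mechanism, enabling it is largely a configuration exercise rather than a code change. A minimal sketch of the relevant Spark properties follows; exact keys and recommended values vary by RAPIDS and CDP release, so treat these as illustrative rather than definitive.

```properties
# Illustrative spark-defaults.conf entries to enable the RAPIDS Accelerator.
# GPU counts and task fractions are placeholders; tune per cluster.
spark.plugins=com.nvidia.spark.SQLPlugin
spark.rapids.sql.enabled=true
spark.executor.resource.gpu.amount=1
spark.task.resource.gpu.amount=0.25
```

With the plugin enabled, supported Spark SQL and DataFrame operations are dispatched to the GPU transparently, which is what allows existing jobs to accelerate without code changes.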

Wide ranging use cases

Cloudera Data Platform supports a wide range of use cases that span from improving operational efficiency to driving business transformation. On the operational side, CDP use cases include data warehouse augmentation by offloading ETL (extract, transform, load) workloads, log aggregation and analytics, dual storage and active archive, and archive-intensive and tiered Hadoop storage.

On the business transformation side, CDP supports diverse use cases for marketing, finance, healthcare, pharmaceutical and manufacturing applications. We’re talking about applications that help organizations anticipate customer needs, detect fraud and reduce risk, improve patient care and reduce healthcare costs, ensure regulatory compliance and validation, and achieve continuous process improvement.

And now, with NVIDIA GPU-accelerated Cloudera Data Platform on NVIDIA Certified Systems from Dell Technologies, organizations can accelerate these analytics use cases while reducing data science infrastructure costs.

What’s under the hood

With NVIDIA Certified Systems from Dell Technologies, organizations deploying NVIDIA GPU-accelerated Cloudera Data Platform can take advantage of the latest and greatest hardware to accelerate the development and delivery of high-quality data and analytics applications.

Validated for running accelerated workloads with optimum performance, manageability, scalability and security, these systems include the Dell EMC PowerEdge R750xa. It’s a two-socket server with the latest 3rd Generation Intel® Xeon® Scalable processors and capacity for up to four double-width or six single-width PCIe NVIDIA GPUs. NVIDIA NVLink bridges allow pairs of A100 PCIe GPUs to share memory, while Multi-Instance GPU (MIG) allows for up to seven independent instances per A100, making it easier to designate and share accelerated resources. For a look inside this incredibly flexible server, check out the Dell EMC PowerEdge R750xa video.

And with Cloudera End of Support (EoS) dates approaching for many legacy products, this is a great time to migrate to CDP on NVIDIA Certified Systems — and futureproof your data center for AI.

As you accelerate into this move, Dell Technologies has the full range of Ready Solutions for Data Analytics available including Dell EMC PowerEdge R650 server admin/head nodes and Dell EMC PowerScale Isilon H600 storage, along with your Dell EMC PowerEdge R750 accelerated worker nodes.

To test drive NVIDIA GPU-accelerated Cloudera Data Platform, visit one of our worldwide Customer Solution Centers. And to explore the wide range of Dell EMC Ready Solutions for Data Analytics, visit the Ready Solutions for Data Analytics page.

Source: delltechnologies.com

Sunday, 15 August 2021

Bringing Together the Open RAN Ecosystem

There is little doubt that communication service providers (CSPs) will need to extend and increase their RAN capacity to accommodate new 5G services. CSPs are using their 5G buildouts as an opportunity to move away from legacy, proprietary RAN technology to an Open RAN architecture. Current legacy RAN technology has some advantages, such as performance, and it is a complete end-to-end solution with a single point of accountability for support. However, the proprietary and closed interfaces have resulted in a lack of innovation and overall control of the RAN, frustrating CSPs and motivating them to look for an alternative solution. As they transition to 5G, the ability to innovate at the edge of the network and meet performance and latency requirements is critical to taking full advantage of what 5G offers.

Open RAN standardizes the interfaces between the radio unit (RU), distributed unit (DU) and centralized unit (CU), opening the door for new RAN vendors and driving new, innovative solutions. The open interfaces enable a diversity of suppliers, increasing flexibility for CSPs, who can choose which vendors they want to deploy. This will result in multivendor solutions, a drastic change from the current RAN environment of proprietary, vendor-locked solutions.

Along with the promise of no vendor lock-in and accelerated innovation, Open RAN also brings some new challenges. The same flexibility that lets CSPs pick and choose which vendors they deploy in their network will also create some complexities. For example, a CSP might choose to deploy one RAN vendor in one geographic location and another RAN vendor in another location based on cost, performance and customer needs. Or, a CSP could use one vendor for the virtualized Distributed Unit (vDU) at the cell site and another vendor for the virtualized Centralized Unit (vCU) as an aggregation point in the network. As valuable as this flexibility is for CSPs, it also creates complexity, since multiple vendors must be integrated to work together seamlessly as one solution.

To help alleviate this challenge, Dell Technologies is working with a diverse and loosely coupled open ecosystem of partners to develop validated solutions that will alleviate some of the integration complexities of multivendor solutions. Together with our partners Intel, VMware, and Mavenir, we’ve developed an Open RAN solution reference architecture that is now available to CSPs as a technology preview. The preview offers a complete Open RAN solution, including Mavenir’s virtualized, containerized vCU, vDU, vRU functions, VMware’s telco cloud platform, and orchestration tools deployed on Dell’s telco-grade PowerEdge servers featuring Intel Xeon Scalable processors. As a pre-validated technology preview, the Open RAN reference architecture gives CSPs a trusted, best-of-breed solution to build out the 5G RAN of the future.

The solution includes technology from the following:

◉ Mavenir: Centralized Management Service (mCMS), Smart Deployment as a Service (SDaaS), Analytics, 5G Centralized Unit-Control Plane (CU-CP), Centralized Unit-User Plane (CU-UP), 5G Distributed Unit (DU), and Mavenir Telco Cloud Integration Layer (MTCIL)

◉ VMware: Virtualized Infrastructure, Platform as a Service (PaaS), Container as a Service (CaaS) and the Orchestrator and Automation layers

◉ Dell Technologies: PowerEdge XR11, XR12, and R750 servers power the Open RAN solution

◉ Intel: 3rd generation Xeon Scalable server processors, eASIC hardware acceleration, and FlexRAN 5G reference architecture

With our commitment to Open RAN, Dell Technologies is leading a cross-industry initiative to bring technology from the world’s leading RAN vendors to bear on the most critical challenges that CSPs are facing today. Our deep and long-standing relationships with VMware, Intel, and Mavenir allow us to develop and test solutions that embrace open standards, multivendor ecosystems, and the latest market innovations.

Open RAN technology is critical to the success of 5G. It opens the future to more innovation, smoother and more reliable supply chain operation, and competitive pricing. Dell Technologies is committed to Open RAN technology through its partnerships, industry alliances, and growing investment in 5G research, development, and services. To find out more about the technology preview, download the solution brief or the tech preview reference architecture.

Source: delltechnologies.com

Saturday, 14 August 2021

Fuel Azure Hybrid Cloud With a Validated Infrastructure


When it comes to cloud strategies, a hybrid cloud approach makes it possible to select the optimal location for your data. However, when data is spread across edge, private, public, and multi-cloud environments, management can become complex. For many customers, the questions usually start with: where do I begin, how do I integrate all the foundational components (OS, containers, database, etc.), and how do I manage all these data locations? By adopting modern tooling and taking a walk, run, fly approach to ramping up their tooling and management skills with Azure Arc, they CAN achieve their goals.


What is Azure Arc?


Governance and rapid progression to DevOps maturity is a continuously evolving customer request. There are proven steps to assess DevOps maturity, but if your organization is still trying to measure its DevOps maturity, it is not there yet. Embracing Azure Arc as your common control plane, together with Azure Resource Manager (ARM), will fulfill the hybrid requirement that many IT departments are being asked to deliver, with the added benefit of aligning to the initial DevOps maturity steps. Some core building blocks, such as Linux, Kubernetes, and ARM templates, are required.
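For readers new to the ARM-template building block mentioned above, a template is simply a declarative JSON description of the resources you want deployed. The following skeleton is a minimal, generic illustration (the parameter and resource values are hypothetical, not tied to any Dell or Azure Arc offering):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2021-04-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```

A template like this is typically deployed with `az deployment group create`; the same declarative approach is what lets Azure Arc and ARM act as a common control plane across locations.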

If the public cloud simply will not work because of sensitive data or round-trip latencies, a hybrid solution is the logical choice. With Azure Arc, that same public control plane is extended into your data center, or all the way to the edge. Additionally, you can deliver as-a-service offerings to your end-business consumers.

Simply stated – Azure Arc simplifies complex and distributed environments across edge, core on-premises, and multi-cloud locations and environments by bringing Azure services to ANY infrastructure, while aligning with these prescriptive benefits:

◉ Gain central visibility, operations, and compliance
◉ Build cloud native apps anywhere, with manageable scale
◉ Run Azure Arc-enabled data services anywhere, recently announced by Microsoft


The truth is, your business users may already be connecting to Azure, or to multiple public clouds, creating a huge opportunity for providing data governance and logging analytics. How will you control this? Azure Arc can help. The complexity sits below the control plane and is presented seamlessly through Azure Arc.

To fully embrace Azure Arc, you’re compelled to adopt newer technologies into your organization. These include Linux, Kubernetes, the Azure Portal, command-line scripting and other innovative technologies.

The promise of hybrid and multi-cloud – actually implemented


Azure Arc extends Azure services and management to any infrastructure. With Arc, you can now consistently build, deploy, operate, and manage all your workloads, from traditional on-premises nodes in physical or virtual environments (hopefully virtual) to cloud-native and edge applications, all through a consistent control plane.

Azure Arc strikes a balance between traditional server workloads and modern containerized workloads, operating in the exact same context across hybrid and multi-cloud environments. It brings the Azure control plane to your data center.


The following architectures have been validated for the Azure Arc control plane:

◉ Dell EMC PowerMax: With Azure Arc-enabled data services back-ended by PowerMax, you can consolidate up to 64,000 devices/LUNs! Containerized database workloads are easy to manage with Azure Arc leveraging PowerMax’s industry-leading capabilities. PowerMax provides plenty of headroom for replicas and unexpected bursts in storage growth, which Kubernetes requires as a solid foundation to maintain its desired deployment state.

◉ Dell EMC PowerFlex: Delivering truly flexible software-defined infrastructure, PowerFlex aligns perfectly with the core tenets of Azure Arc. PowerFlex offers the flexibility to consolidate heterogeneous workloads – across bare-metal OSs, hypervisors and container platforms – without physical segmentation or re-clustering. Organizations can grow from just a few nodes to hundreds, on demand and non-disruptively, with linearly scaling performance and network throughput.

◉ Dell EMC PowerStore: The PowerStore storage appliance is purpose-built on a container-based software architecture. This modularity enables feature portability and maximum deployment flexibility, automatically balancing storage and workloads to maximize system utility. Each active-active PowerStore appliance can grow to over 2.8 PB of effective capacity, and multiple appliances can be clustered for greater performance. This is an excellent fit for Azure Arc data services options like PostgreSQL Hyperscale and, of course, SQL MI.

◉ Dell EMC Integrated System for Microsoft Azure Stack HCI: Azure Kubernetes Service on Azure Stack HCI (AS HCI) enables organizations to implement Azure Kubernetes Service (AKS) on-premises, taking advantage of the ease of use and high-security capabilities delivered for containerized applications at scale. Additional integration with Windows Admin Center (WAC) adds another level of management visibility and control, including Dell Technologies integration with WAC. AKS on AS HCI simplifies the process of setting up Kubernetes.

◉ Dell EMC VxRail with Tanzu: Tanzu Architecture for VxRail, or TA4V, is a purpose-built, validated, tested reference architecture built on VxRail. TA4V is designed for cloud-native workloads with PaaS and CaaS services and delivers a fast and easy way to consume Tanzu Kubernetes Grid. Tanzu Kubernetes Grid on vSphere has been certified for Azure Arc and Azure Arc-enabled data services. This pairs the power of the Tanzu portfolio with the value and flexibility of Azure Arc-enabled data services to offer customers the best of both worlds. With the operational automation from VxRail, companies can seamlessly scale the infrastructure to meet their cloud-native application development needs as they grow.

Dell Technologies APEX Data Storage Services


APEX Data Storage Services is Dell Technologies’ storage-as-a-service offer. It is a portfolio of scalable and elastic storage resources, designed for OpEx, that delivers simplicity, agility and control to our customers’ storage environments. Now you can consistently align expenses and usage with services designed for 99.9999% availability. This offer has also been validated for Azure Arc, and the two align perfectly: APEX and Azure Arc complement each other when it comes to managing elastic, mission-critical on-premises workloads, especially databases. Since APEX is based on the aforementioned Dell Technologies validated architectures, APEX Data Storage Services is compatible with many permutations of common workload choices. As-a-service offerings in your data center, aligned across both hardware and software, are now delivered by Dell Technologies and Microsoft.

Proactive cloud-based monitoring of on-premises infrastructures with Dell EMC CloudIQ


To add even more value to the on-premises landscape, the Dell Technologies CloudIQ offering greatly complements any Azure Arc on-premises management. CloudIQ brings a cloud-native machine learning (ML) engine to your on-premises storage subsystem. Capabilities that CloudIQ delivers dynamically include intelligently predicting capacity, identifying “noisy neighbors” (key for database environments) and finding reclaimable storage, among many others. CloudIQ is supported on PowerMax, PowerFlex, PowerStore and VxRail.

Engage our Dell Technologies Services


In interactions with our Dell clients, we discuss a Canonical Model, now referred to as the Holistic Model, which I cover in another blog. Each time the story is the same: complexity can be overwhelming when architecting a dynamic solution. Dynamic solutions are what your end-user customers demand, but your business users want “easy.” There are many skill sets required across the layers. Our services teams bridge the technologies and concepts listed above that many of your teams may not be comfortable with yet (like Linux and Kubernetes). This is where Dell Technologies Services can help. From strategy and implementation to adoption and scale, partnering with Dell Technologies lets you confidently extend your Microsoft Azure environment to on-premises and to the edge, accelerating innovation and delivering even more value to your business.

Now that you have a better understanding of the business value and opportunities of all things Azure Arc, read my technical blog where I describe details of tooling, configuration, and enablement of Azure Arc. There is so much more to talk about as we’ll dissect the solution and illustrate how all the pieces of the puzzle fit together.

Source: delltechnologies.com

Friday, 13 August 2021

APEX Lets Dell IT Tackle Higher Value Needs


Using IT-as-a-Service instead of building and managing your organization’s own IT infrastructure is like moving from being a plumber to being an interior designer. Instead of spending energy making sure the pipes don’t leak and the faucets don’t drip, you can focus on creating an overall space that meets the higher needs of its occupants—its livability, function, esthetic.

As customer zero for Dell Technologies APEX, Dell’s recently launched portfolio of subscription as-a-Service offerings, Dell IT is exploring how APEX Cloud Services can elevate our role in delivering IT. You could say the aim is to get us out of the infrastructure basement and into the airier space of higher-value services.

To start with, we are working toward using APEX Cloud Services to provide better infrastructure control at third-party manufacturing facilities, simplify management of remote data center locations and gain more flexible infrastructure capacity where and when we need it across IT.

I expect we will pursue an array of emerging use cases as we strive to integrate APEX into our existing IT ecosystem.

Being the first APEX Cloud Services customer 

In April of 2021, Dell Digital, Dell’s IT organization, was the first recipient of APEX Cloud Services, previously called Dell Technologies Cloud Platform. Dell Digital and Infrastructure Solutions Group (ISG) teams successfully delivered and installed the first subscription-based model in Dell’s Round Rock data center on April 14, eight days after it was shipped, despite facing tornadoes and Covid-19 restrictions.

After testing the system, which features VMware Cloud Foundation on VxRail, Dell Digital is deploying it for a use case where we don’t have local on-site support. This first deployment will be to enhance a disaster recovery/business continuity use case we needed to solve quickly and efficiently. We were able to get it up and running fast and take advantage of flexible capacity going forward. 

That is a first step towards utilizing the tremendous potential APEX Cloud Services has to help us with our core capacity. Over the next year, we will be working to integrate its offerings into our current IT ecosystem to gain simplicity, agility and control. 

Control at the edge

Among the use cases we are exploring is using APEX Cloud Services at the edge of our network or in some countries where we do not have a lot of infrastructure team members. With APEX Cloud Services we can potentially maintain and operate infrastructure in a consistent way remotely, including turning over management to the APEX Cloud Services team. Besides flexibility and simplicity in these locations, APEX can also be a part of maintaining security and governance controls.

For example, we have some instances where we specify infrastructure standards to third-party vendors and they operate that infrastructure. APEX would give us the ability to have the APEX Cloud Services team maintain the infrastructure to a global standard, covering not only the infrastructure used but also the methodology for operating and maintaining it going forward.

In an increasingly edge-focused IT world, the gain would be to bring the control boundary of this infrastructure back inside the Dell tent, which would be a big win for us. It would also help us gain agility in these facilities, with the ability to get resources online faster and refresh them faster.


We are also considering the benefits of using APEX Cloud in edge deployment scenarios like manufacturing facilities and some office locations—generally, for handling infrastructure where we want to get enhanced agility and flexibility in places that have traditionally been much more static.

More broadly within our current private cloud, we are looking at how to take advantage of the automation and flexible capabilities that APEX Cloud offers. We are in the process of performing that integration work, which we expect will come together over the next year or so. This would make APEX a natural extension of our own private cloud infrastructure capacity.

Getting to higher value IT

The value to Dell Digital and IT organizations overall of maximizing the potential of APEX Cloud Services is clear. Buying infrastructure components, plugging them in, building the environment yourself and getting it online takes a lot of energy out of the organization just to make sure timelines are met. If you have another way of getting that done, you’re saving all that energy. You can instead focus it on providing value to your customers on top of this infrastructure.

That might include automation of higher-level activities like patching, upgrades, database migrations, resiliency enhancements, etc.—or any number of investments that impact processes and advancements closer to our business users’ work. 

Source: delltechnologies.com

Thursday, 12 August 2021

Unlocking the Digital Transformation of Public Sector and Services


With Covid-19 vaccination programs underway and fiscal relief and furlough measures winding down, the focus of international governments is gradually turning from short-term recovery to laying the foundations for sustainable long-term growth. Policymakers are now confronted with strategic choices on how to ensure a fair, equitable and connected society post-pandemic.

This is why, earlier this year, Dell Technologies partnered with the International Data Corporation (IDC), the premier global provider of market intelligence, on a series of info-briefings to help navigate these changes and provide support and recommendations on the keys to unlocking opportunities in the next normal.

In search of an ambitious, citizen-focused recovery and green transition, governments have rightly identified digital transformation as a key driver. Digital is at the heart of national recovery packages across the Americas, Asia and Europe. According to IDC, Germany alone will invest 40% of its €50 billion package in accelerating digitalization in various sectors, with most of the stimulus going to broadband expansion, online public services and digitalizing the national economy.

Smart investments in the digital transformation of the public sector have enormous potential to improve social well-being by driving long-term economic growth, accelerating global sustainability efforts and addressing long-term systemic challenges.

In addressing the challenges brought about by the pandemic, a stark lesson for public organizations has been that significant technological changes were needed to succeed in the next normal. Overall, under 10% of public authorities said they had not faced any organizational challenges as a result of Covid-19 and the emergence of new working models.

The keys to successful digital transformation

IDC’s research shows that six keys are core to the successful digital transformation of public services:

◉ Establishing an omnichannel citizen experience

◉ Ensuring trust and security

◉ Promoting data-driven policymaking and service delivery

◉ Adapting to the future of work

◉ Enabling digital inclusion

◉ Building agile, intelligent platforms and infrastructure

The first three keys are critical enablers in accelerating digital transformation across all aspects of public services.

Source: IDC, 2021

Omni-channel communications via portals that consolidate government data enable users to access more information and services. The use of APIs, machine learning, and intelligent process automation can make the delivery of public services far more efficient, responsive and economical. For example, they can shorten the process for the notification and payment of taxes or license fees as well as for access to government payments like unemployment benefits.

A pertinent example of the potential of omnichannel communications comes from the city of Vienna, which built on its citizen-centric approach by repurposing some of its more advanced communication channels to deliver Covid-19 relevant information and help maintain compliance with guidance. Using a chatbot (WienBot) to deliver information on new measures, local authorities alleviated pressure on the main information line, preserving a crucial resource for citizens.

Another key building block is security and trust. Digitalization provides citizens with ease of access to public services, but trust in the security of IT solutions is needed for their uptake. This is well recognized today, with just under half of all government agencies identifying digital trust programs as a top business priority heading beyond 2021.

Robust cybersecurity strategies are essential. An early leader in this area is Australia, whose ‘Cyber Security Strategy 2020’ comprises an investment of A$1.67 billion over 10 years and targets improving the competitiveness of its industry through AustCyber, the Australian Cyber Security Growth Network. By bringing together start-ups, venture capital funds, government agencies and educational institutions, AustCyber will act as a multiplier and connector for the Australian cybersecurity industry.

Public authorities providing secure access to government services must also not discourage users by making things too complex. Digital identity solutions can help address the challenge of balancing security with convenience. France significantly improved its services by enabling France Connect, a log-in mechanism recognized by all public services. At the EU level, just last month the European Commission proposed new legislation introducing an EU-wide voluntary digital identity wallet scheme to enable access to public services securely.

Finally, data should be leveraged as a strategic asset. Governments may invest in innovative solutions such as Big Data, analytics, and machine learning but too often organizational obstacles, legal and political boundaries and insufficient internal know-how mean data is not converted into actionable insights.

The potential of effective data-driven approaches is well illustrated by the STOIC project, started in the Cochin hospital in Paris. In medical care, there is a need for increased data sharing and collaboration to quickly obtain data sets large enough for treatment design. With access to new databases and pilot solutions using AI for lung imaging through STOIC, physicians were able to leverage data to optimize patient treatment plans.

The key to unlocking this potential is reforming data regulation to facilitate access and sharing of data. Through initiatives like the EU’s drive to create data spaces like the European Health Data Space and facilitate business-to-government data sharing through its Data Act, public authorities in the EU will be better able to realize the true value of data.

Building momentum


The digital transformation of public services has the potential to significantly improve societal well-being through more efficient, secure, responsive and human-centric services as well as being a driver for economic growth.

Governments across the world have recognized its importance in their recovery plans. However, while the objectives of these plans are long-term, the window to secure funding is closing. In France, for instance, the strategy has an end date of 2030 but most plans will be executed and funded between 2021–2022.

Public authorities, therefore, face a pivotal moment. To capitalize on the available funding and technological solutions, governments will need to embrace ambitious reform programs for public services, addressing all six building blocks of digital transformation.

Source: delltechnologies.com

Tuesday, 10 August 2021

Arizona State University Gives Minutes Back to Science


Scientific research has long been a core part of the mission of Arizona State University. Today, the university holds the distinction of being one of the fastest-growing research institutions in the United States. Since 2002, the school’s annual research expenditures have more than quintupled, topping $600 million, and ASU is among the top 20 universities in the world for the number of U.S. patents issued.

Under the umbrella of this vast research program, ASU scientists explore some of the world’s most complex challenges, including those arising from population growth, urbanization, globalization and technological advancements. Through science, social science and humanities approaches, ASU researchers are creating innovative and sustainable solutions to pressing societal problems.


Many of these research initiatives would not be possible without the computational power of high performance computing, and this is where the ASU Research Computing organization enters the picture. ASU Research Computing is dedicated to enabling research, accelerating discovery and spurring innovation through the application of advanced computational resources to grand challenges in science, engineering and health.

The HPC environment

Among other resources, ASU Research Computing gives the university community access to a Dell Technologies supercomputing cluster that comprises 14,000 CPU cores, 330 GPU devices, 1.2 petabytes of scratch storage in a high-speed parallel file system and 4 petabytes of Dell EMC PowerScale scale-out network attached storage.

This highly heterogeneous environment includes multiple generations of Intel® Xeon® processors; a variety of interconnects, including Intel® Omni-Path, InfiniBand and Ethernet fabrics; and a range of Dell EMC PowerEdge servers. It also includes dedicated virtual machines for specific research environments and a HIPAA-aligned secure computing environment with support for sensitive or federally regulated data.

System users have two ways to access the supercomputing cluster, according to Douglas Jennewein, Senior Director of the Research Computing group at ASU. Users can log in to the cluster via a traditional SSH terminal or via a web-based interface that uses Open OnDemand, an open-source HPC portal based on software developed by the Ohio Supercomputer Center.

“Open OnDemand is a very popular access modality for our researchers,” Jennewein says. “It has really been a game changer for us.”
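For readers unfamiliar with the traditional SSH access path mentioned above, the workflow typically looks like the sketch below. The host name is purely illustrative, and a Slurm scheduler is assumed (the article does not name ASU's scheduler):

```shell
# Illustrative only: host name is hypothetical and Slurm is assumed.
# Create a minimal batch script describing the job to run on the cluster.
cat > hello.sbatch <<'EOF'
#!/bin/bash
#SBATCH --job-name=hello
#SBATCH --ntasks=1
#SBATCH --time=00:05:00
echo "Hello from $(hostname)"
EOF

# On the cluster login node (reached via something like
# `ssh user@login.hpc.example.edu`), the job would be queued and checked with:
#   sbatch hello.sbatch
#   squeue -u $USER
```

Open OnDemand wraps the same job submission in a web form, which is why it lowers the barrier for newcomers.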

Powering more users

Easy access to the cluster via Open OnDemand has fueled a rush on resources at the university. ASU Research Computing now accommodates many more users than in the past, including large numbers of academics from disciplines that typically aren’t associated with advanced computing. In a parallel change, ASU Research Computing is increasingly serving faculty members who are using the HPC cluster in the classroom, along with growing numbers of student users.

Usage numbers tell the tale of the popularity of the HPC cluster. Over the past year, ASU Research Computing has delivered an average of 8 million core CPU hours per month. In any given month, 500 to 600 users put the system to work, and over the course of a year 1,300 distinct users make use of the system. The HPC cluster typically runs about half a million jobs per month.
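As a rough back-of-the-envelope check on those figures (using the 14,000 CPU cores cited earlier and assuming a 30-day month, purely for illustration):

```python
# Rough utilization estimate from the figures quoted in the article.
cores = 14_000                      # CPU cores in the cluster
core_hours_per_month = 8_000_000    # average delivered core hours per month
available = cores * 24 * 30         # core hours available in a 30-day month

utilization = core_hours_per_month / available
print(f"available core hours: {available:,}")         # 10,080,000
print(f"approximate utilization: {utilization:.0%}")  # 79%
```

In other words, the cluster runs at roughly four-fifths of its theoretical CPU capacity, which is a healthy load for a shared academic system.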

“Open OnDemand, the web-based access, has opened up a whole new class or segment of users, folks who are newer to the cluster or newer to advanced computing,” Jennewein says. “This is a much more friendly onramp to HPC than traditional approaches. This has opened up the door for a lot more use cases and domains of science and research that are traditionally under-represented in high performance computing, including humanities, social sciences and the arts.”

An HPC & AI Center of Excellence

Arizona State University is among an elite group of Dell Technologies HPC & AI Centers of Excellence. These centers, found around the world, provide thought leadership, test new technologies and share best practices with technology users. They offer collaborative expertise that spans from high-speed data analytics, AI, machine learning and deep learning to visualization, modeling, simulation and more. They also engage in performance analysis, optimization and benchmarking, along with system design, implementation and operation.

Among other recent joint efforts, ASU Research Computing is working closely with the Dell Technologies HPC & AI Innovation Lab on the development of the Omnia software stack. This open-source software project helps HPC shops speed and simplify the process of deploying and managing environments for mixed workloads, including simulation, high throughput computing, machine learning, deep learning and data analytics.

“We have ASU engineers on my team working directly with the Dell engineers on the Omnia team,” Jennewein says. “We’re working on code and providing feedback and direction on what we should look at next. It’s been a very rewarding effort. Omnia is something we want everybody to use. We’re paving not just the path for ASU but the path for advanced computing.”

Source: delltechnologies.com

Sunday, 8 August 2021

Helping Your Data Scientists Make You Smarter, Faster


When IT launched an Enterprise Data Science platform in 2020 to help data scientists build artificial intelligence (AI)- and machine learning (ML)-driven processes at Dell, a key challenge was whether the 1,800 team members already engaged in data science across the company would adopt it.

With platform use up by 50 percent over the past year and growing, as well as an expanding demand for more data science capabilities, Dell IT is continuing to invest in AI and ML. We are now looking to a much broader challenge on the horizon: making everything at Dell AI- and ML-enabled across all our applications.

Central to realizing that vision is making the data science process better and faster by improving data scientists’ end-to-end experience.

What we’ve learned

In an earlier blog, I described the creation of the Enterprise Data Science platform—Democratizing Data Science, A Federated Approach to Supporting AI and ML. That platform is now supporting more than 650 users and is on course to reach 1,000 users by the end of this year. As a result, the Enterprise Data Science team has been looking more holistically at the data science experience. We’ve discovered several key insights that are helping to focus our ongoing investments.

First, we found that with data science, one size doesn’t fit all. There are three types of data science users—teams that are just forming, nascent teams that have delivered first wins, and mature teams seeking advanced capabilities. While the first type of user wants something that is simple and ready to go, the third type wants a lot of customization that will help them access large compute pools or deploy and integrate models.

We are working to meet these diverse needs with standardization, blueprints and automation. And we are meeting them in another critical way as well, by providing an IT team that data scientists can turn to directly for advice and help.

Although there are different data science personas, all have a few common needs. They all start with data: the need to discover it, acquire it, process it securely and analyze it for patterns and insights. They typically work in an iterative fashion: get data, analyze it, interpret and validate the results, then go back to the first step and get more data. It is a “rinse and repeat” cycle to find what supports their hypothesis, which generally stems from a business opportunity identified by a subject matter expert and handed off to engineers and data scientists. The faster this cycle runs to validate hypotheses, the better the results the data science team can deliver.

The Enterprise Data Science platform team is working on tools that will help speed up these repetitive processes—in particular, automation of data access and processing, which is where data science practitioners spend the most time.

Currently, data scientists must figure out on their own where to find the data they need across Dell’s various databases and how to access it. They might search through tables or ask around to find what they are looking for. Imagine hundreds of people doing that repeatedly and in silos. As they do this work, they are effectively finding and creating value from information by defining what is the data that is most useful to optimize a process.

Our goal is to help data scientists move faster, as well as to capture the valuable data they create. To achieve this, we are collaborating with our data scientists to identify the top sources they get data from and developing a solution that standardizes the process of discovering, acquiring and processing data from those locations. The idea is to allow them to access data instantly and securely from our Dell Data Lake and the other main data repositories across the company.

Once data scientists obtain the data, we support them in versioning, documenting, testing and cataloging the new data features they create, so those features are available to other data science teams.
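A lightweight way to picture the version-document-test-catalog step is a feature registry. The structure and field names below are illustrative assumptions, not Dell’s actual catalog.

```python
# Hedged sketch of registering a derived data feature: the feature is
# tested against a known input before being versioned and catalogued.

feature_registry = {}

def register_feature(name, version, description, compute_fn, test_input, expected):
    """Version, document, test and catalog a feature in one step."""
    # The feature must pass its own test before entering the catalog.
    assert compute_fn(test_input) == expected, "feature failed its test"
    feature_registry[(name, version)] = {
        "description": description,  # documentation travels with the feature
        "compute": compute_fn,
    }

register_feature(
    name="order_value_usd",
    version="1.0.0",
    description="Order line total in USD (quantity times unit price)",
    compute_fn=lambda order: order["qty"] * order["unit_price"],
    test_input={"qty": 3, "unit_price": 2.5},
    expected=7.5,
)
```

Keyed by name and version, the registry lets other teams discover and reuse a feature without re-deriving it.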

All our capabilities are driven by APIs (Application Programming Interfaces), which we bundle into internally developed SDKs (Software Development Kits) for data scientists. This lets them engage with our technology through the language of their choice (e.g. Python) in a simple and efficient manner.
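From a data scientist’s point of view, an SDK like the one described above might feel something like the following. The class, methods and catalog here are hypothetical stand-ins for illustration, not Dell’s actual SDK.

```python
# Toy stand-in for an SDK client that wraps data-access APIs behind a
# simple discover/acquire interface.

class DataLakeClient:
    """Hypothetical client for discovering and acquiring catalogued data."""

    def __init__(self, catalog):
        self._catalog = catalog  # dataset name -> rows

    def discover(self, keyword):
        # Find datasets whose name contains the keyword.
        return [name for name in self._catalog if keyword in name]

    def acquire(self, dataset):
        # Return the rows of a catalogued dataset.
        return self._catalog[dataset]

catalog = {"sales_emea_2021": [100, 200], "support_tickets": [7, 9, 11]}
client = DataLakeClient(catalog)
matches = client.discover("sales")
rows = client.acquire(matches[0])
```

The design choice is the one the article implies: the SDK hides where the data lives and how the API is called, so the practitioner works in plain Python.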

Getting to AI models faster

Beyond the data piece of their work, data scientists share common needs around the other steps of their development process, including using algorithms to solve the problem they are tackling and then building and training the model that delivers the desired result.

As we’ve scaled our support of data scientists, the team has found that most data scientists set up very similar algorithms for specific functions in their models, and that, invariably, they begin their work using small amounts of data and then focus on making their models scale over time. Yet we see each of our data science team members start from a blank slate on each project.

The Enterprise Data Science team has a sub-team that specializes in DevOps for AI and ML, working to provide templates and infrastructure setups that help data scientists get their models up and running faster and scale them more efficiently. Our aim is to shorten the path from ideation to production. To achieve this, our software engineering team works closely with data scientists to first understand and drive several use cases to success, then identify where the process is repetitive and create solutions.

Our initial work has guided us to begin creating baseline algorithms, which means data scientists won’t have to start from scratch on each new model. In a similar fashion, data scientists can tap into blueprints that help them easily parallelize workloads, leverage specialized compute instances (such as GPUs) and train and re-train algorithms at scale. Our first templates are included in every workspace on our AI/ML platform; data scientists just open them and make their modifications to get going. Our initial data tells us users can go from idea to production six to 10 times faster using these tools.
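The kind of blueprint described above often boils down to a shard-and-map pattern. The sketch below uses Python’s standard library to show the shape of such a template; a real blueprint would target a compute pool or GPUs, but the structure is the same. The sharding scheme and workload function are assumptions for illustration.

```python
# Illustrative parallelization template: split a workload into shards and
# map a per-shard function over them concurrently.
from concurrent.futures import ThreadPoolExecutor

def score_shard(shard):
    # Stand-in for per-shard model work (training, scoring, etc.).
    return sum(x * x for x in shard)

def run_template(data, n_shards=4):
    """Split data into interleaved shards and process them in parallel."""
    shards = [data[i::n_shards] for i in range(n_shards)]
    with ThreadPoolExecutor(max_workers=n_shards) as pool:
        partials = list(pool.map(score_shard, shards))
    return sum(partials)  # combine partial results

total = run_template(list(range(10)))
```

Because the template fixes the sharding and execution plumbing, a data scientist only swaps in their own per-shard function, which is exactly the “open it and make your modifications” workflow the article describes.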

Smoothing out the last mile

From a customer and business perspective, the most important step in data science is the “last mile” of the process: when AI and ML models are implemented into Dell applications to gain value from the new insights and innovations they bring. Here too, the Enterprise Data Science platform team is working to add speed and efficiency by providing templates, training and support.

To accelerate these tasks, the team places a lot of focus on skills transfer and training. On the one hand, we have to train data scientists to build more deployment-ready models, using standardized technology that our engineers can understand and quickly implement into apps. On the other hand, we have to help engineering teams become more familiar with data science technologies to smooth out deployment at production scale.

The team is currently focused on seven production engagement cases for pushing new data science models into IT apps. This will help our engineers define patterns for standardization and create a common architecture. We hope to reduce such implementations, which could previously take several months, down to six to eight weeks by the second half of 2021.


Data science, AI and ML are the areas of technology that change the most and at the same time represent a big opportunity for us to improve our customer experience and business outcomes. We have made great strides in improving the data science process that is fueling innovation across Dell’s business units and will continue to develop standard and automated capabilities to add efficiency.

But perhaps our biggest success is supporting data science in a much more direct, non-automated interaction. When data scientists and engineers have questions, they can reach out and someone in IT will pick up the phone. And we learn from each interaction. That’s what our Enterprise Data Science team is all about.

Source: delltechnologies.com