Saturday 29 October 2022

Analyze Data Relationships at Speed and Scale


Graph analytics has emerged to fill a need that traditional relational databases struggle to meet: exploring relationships across extremely large volumes of data with speed and efficiency.

The logical structure of tables and columns used by relational databases makes it difficult to connect all the information and analyze how it is linked, especially when trying to analyze relationships across multiple layers. The more layers you try to analyze, the harder it gets from a relational database perspective, because each additional layer requires joining in more tables. The chart on the left shows how all the data is connected to what you are trying to analyze. On the right, graph-powered analytics connects the same data using a node-and-edge approach, where edges represent the connections between two nodes.
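To make the node-and-edge idea concrete, here is a minimal Python sketch using the open-source networkx library (not TigerGraph itself); the entities and connections are invented for illustration. Where each extra relationship layer in a relational model means another join, in the graph it is simply a larger hop count:

```python
import networkx as nx

# Build a tiny graph: nodes are entities, edges are the connections between them.
G = nx.Graph()
G.add_edge("patient:alice", "visit:2022-03-01")
G.add_edge("visit:2022-03-01", "doctor:dr_kim")
G.add_edge("visit:2022-03-01", "rx:metformin")
G.add_edge("rx:metformin", "condition:diabetes")

# Everything reachable from the patient within 3 hops -- the kind of
# multi-layer question that would need several chained JOINs in SQL.
within_3_hops = nx.single_source_shortest_path_length(G, "patient:alice", cutoff=3)
for node, hops in sorted(within_3_hops.items(), key=lambda kv: kv[1]):
    print(f"{hops} hop(s): {node}")
```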


The ability to understand the relationship between different, and dispersed, data points makes it possible to perform deeper analysis and deliver the results as information that can be easily understood and acted upon. If you think about healthcare and all the different applications used to collect data about a patient, you can quickly see how this approach to analytics has tremendous value. The image below, generated by TigerGraph for UnitedHealth Group, illustrates how information derived from graph analytics can be presented to a doctor as an interconnected timeline so everything can be seen at a glance to better serve their patients.

Source: Courtesy of TigerGraph.

Knowledge graph analytics can also translate into significant savings in time and money. For example, UnitedHealth Group implemented TigerGraph in its contact centers and estimated it has saved $150 million a year by enabling its medical professionals to provide accurate and effective care path recommendations in real time. TigerGraph has been one of the leading vendors in the graph analytics field, and Dell Technologies customers have had great success implementing it on Dell infrastructure. There are a few major features that make TigerGraph a great partner:

◉ Graph 3.0 technology: TigerGraph is built on the third and latest generation of graph analytics technology, while its main competitors are still on the 1.0 or 2.0 generations.

◉ High performance: The whole graph is held in memory, which means that reading the data does not require access to hard drives or SSDs. All the data sits on the compute tier.

◉ Parallel loading and updates: In a traditional relational database, updates are often held in batch form, forcing them to sit in a queue until the next scheduled update run. With TigerGraph, the graph is updated immediately as soon as the data changes.

◉ Data sharding: Within TigerGraph, partitioning the data is native, so the graph data is split across the servers in the cluster. This enables parallel computing and faster performance. And because there is redundancy, the cluster can keep functioning even if one server goes down.

◉ Scalability: While most other vendors have a graph size limitation of one terabyte, TigerGraph has no such limit, making it a scalable solution. Scaling up and down is also native to the TigerGraph software, so more servers can be added as needed and become part of the cluster.

◉ MultiGraph: With this security and privacy feature, the whole graph is generated in the back end when the data set is loaded, but depending on the role-based access each user has, they can see only the portion they are allowed to see.

◉ Deep-link analytics: The depth of data analysis is measured in hops, or levels. In relationship analysis, the deeper the level you go, the harder the computation becomes and the more compute and memory resources are needed. TigerGraph is the only software able to perform analytics 10 or more levels deep.

◉ Storage efficiency: The raw data is compressed, saving a lot of storage space and making the compute cluster more efficient.

Source: dell.com

Thursday 27 October 2022

Delivering IT as Easy as Ordering Takeout


If we can track our lunch deliveries down to the minute and pick up online orders at the store in a matter of hours – why should it be difficult for your workforce to get what they need from IT, or time consuming for IT to provide it?

The answer, of course, is that it shouldn’t be. If your IT department isn’t delivering this level of service, that doesn’t mean the restaurant down the street has better IT. They’re just delivering a better experience using the resources and technology available to them.

For the restaurant, instead of handling endless phone calls for the same requests – placing orders, requesting customization, asking what’s on the menu – everything is within reach from an online app or portal, and staff are free to take care of other business needs.

The situation is similar in a traditional IT environment. Your technical staff is busy with repetitive requests. Employees may not even know what services are available to help them succeed, or they may simply view IT as a repair shop. Overall, you are neither getting the most value from your IT department, nor providing the best experience for employees.

With the right approach you can provide a customized, consumer-grade experience that is easy for employees to use, drives adoption of IT standards and reduces phone calls and support costs — freeing up IT to innovate and add more value to your organization. Here’s how:

Use a Self-service Marketplace to Handle Day-to-day Requests


Think about the process you go through to order lunch online. You open a food delivery app or the restaurant’s website and it’s easy to find what you want. Hungry for a sandwich? That section of the menu is a click away and your options are clear.

A self-service IT marketplace works the same way, acting as a digital storefront for IT that makes it simple for employees to find what they need and get on with their day. Everything from ordering a replacement laptop to requesting developer resources, software licenses, tutorials and more is easily accessible from a “shoppable” interface.

Most of the time, employees can find what they need anytime they need it without involving an IT professional, but when direct assistance is needed, requesting help is simple via the same interface.

This approach alleviates many of the day-to-day IT requests, but a digital IT marketplace alone isn’t enough. It takes automation and personalization to get the most value from your digital services management toolset.

Let Automation Do the Heavy Lifting


If your self-service marketplace is the storefront, automation is the fulfillment engine. Just as your takeout order is automatically routed to the right places (sandwich order is sent to the kitchen, drink order to the front counter), automation combines repeatable tasks and company policies to drive end-to-end resolution of IT requests.

For example, if an employee needs a replacement laptop, they can simply log in to the marketplace and “shop” the hardware or bundles available to them. Since IT can define equipment parameters based on personas or roles, employees will only see options that are tailored to help them get their job done efficiently.

Would you like fries with that? Automation also helps predict additional resources that may enhance the request, like a docking station or an extra power cord while traveling.

Based on your company policies, automated workflows can handle approvals of standard requests and move them directly to fulfillment, while exceptions that need review are automatically routed to the appropriate approver. With this process, IT benefits from an efficient balance between role-based enablement and cost control, and employees can easily locate order status and tracking information.

The same process can be applied to software license requests or developers needing virtual machines, containers, or additional development environments.
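As a rough illustration of this kind of policy-driven automation – a sketch only, not any specific Dell product – the following Python snippet scopes a catalog by persona and routes each request to auto-approval or review. The personas, catalog items and cost threshold are all invented:

```python
from dataclasses import dataclass

# Hypothetical persona-scoped catalogs and an invented auto-approval threshold.
CATALOG = {
    "developer": {"laptop-xl", "vm-medium", "container-quota"},
    "sales": {"laptop-standard", "docking-station"},
}
AUTO_APPROVE_LIMIT = 1500  # standard requests under this cost skip manual review

@dataclass
class Request:
    persona: str
    item: str
    cost: float

def route(req: Request) -> str:
    if req.item not in CATALOG.get(req.persona, set()):
        return "rejected: item not in this persona's catalog"
    if req.cost <= AUTO_APPROVE_LIMIT:
        return "auto-approved: sent straight to fulfillment"
    return "routed to a reviewer"  # the exception path

print(route(Request("developer", "vm-medium", 300.0)))   # auto-approved
print(route(Request("sales", "vm-medium", 300.0)))       # rejected
print(route(Request("developer", "laptop-xl", 2400.0)))  # needs review
```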

Personalize the IT Experience for Enhanced Utilization


When an online storefront lets you customize your order, remembers your past purchases and makes recommendations that you actually want, aren’t you more likely to return? In the same way, personalization plays a key role in ensuring employees have a great experience with IT services, devices, applications and infrastructure.

Personalization also allows IT spending to be optimized by role. Combined with key integrations, it enables frictionless workflows for procurement, data ingestion and more, further optimizing the support and device lifecycle experience for both IT and employees.

Optimize IT Operations and Experiences with Strategic Integrations


Beyond personalization, strategic integrations add a little magic to the support experience for users and IT professionals. Consider integrating platforms like Dell Premier for optimizing procurement workflows and TechDirect for modern PC management.

In addition, third-party integrations enable a multi-channel experience, making your self-service marketplace available directly within the tools your employees use the most. For example, integrating with Microsoft 365 or Teams provides your workforce with a familiar interface to easily locate the resources, applications, devices and support they need to get work done.

Empower Your Workforce with Consumer-grade Experiences


From ordering a sandwich to replacing a broken peripheral – tailored, automated digital experiences have never been more important, nor more attainable. With the right technology, tools and processes you can increase workforce productivity, IT efficiency and worker satisfaction while providing the consumer-grade experience we’ve all come to expect from our technology interactions.

Please reach out to a sales specialist to discover how Dell Technologies can help you implement the right combination of Digital Services Management technology, Workforce Personas, automation and proven processes to accelerate your next generation IT marketplace.

Source: dell.com

Sunday 23 October 2022

Are You Familiar with the Monster in the Cloud?


While the world gets to put away its scary stories and sleepless nights come November, cybersecurity teams wrestle with fears of a security breach year-round.

Over the last 10 years, a few major shifts have happened. Businesses embraced digital transformation, gradually adopting cloud-based applications, software as-a-service (SaaS) and infrastructure as-a-service (IaaS). Then, the COVID-19 pandemic pushed organizations to remote work and dramatically changed the network landscape – including where data and apps were managed. Today, our hybrid, hyper-distributed world brings new challenges to security teams as more corporate data is distributed, shared and stored outside of on-prem data centers and into the cloud. Despite its many benefits, work-from-anywhere exposes organizations to new vulnerabilities – new monsters – that must be slain. The old “castle and moat” security model, which focused on protecting the data center via a corporate network, is essentially obsolete.

Enter a new security model called a Secure Access Service Edge (SASE) architecture. SASE brings together next-generation network and security solutions for better oversight and control of the IT environment in our cloud-based world. How do you enable a SASE architecture? With Security Service Edge (SSE) solutions. SSE solutions enable secure access to web, SaaS, IaaS and cloud apps for a company’s users, wherever they are. The core products within SSE are:

Secure Web Gateway (SWG) for secure web & SaaS access

Cloud Access Security Broker (CASB) for secure cloud app access

Zero Trust Network Access (ZTNA) for secure private app access (versus network access)

With data breaches becoming costlier by the minute, cloud protection is a must. Here are the top 10 reasons cyber experts see Security Service Edge adoption as critical for an effective long-term security posture.

1. Controlling “shadow IT” – The average enterprise finds that its users are accessing over 2,400 cloud services, of which only 2 percent are IT-led and under full admin control. The remaining 98 percent are user-led and not under full IT control. That is a lot of invisible usage and data. How can you protect what you can’t see? An SSE solution brings full visibility to the cloud applications in use, eliminating shadow IT.

2. Preventing data loss in the cloud – Legacy security technologies can’t see or identify when a user moves sensitive corporate data from a corporate instance of a cloud application to a personal instance of a cloud application. When you enable an SSE, your security team will be able to see that data movement and enforce policy to block or prevent data loss.

3. Enabling Zero Trust access – An SSE platform is designed to grant least-privileged access to authenticated users, ensuring each user can access only the corporate data required for their role. Legacy network security models assume anyone granted access to your network is “safe.” But if a bad actor gains access to your corporate network, they have almost unfettered access to corporate resources and data. With a Zero Trust Network Access solution, you can avoid unnecessary network access by enabling direct access to cloud apps (a minimal sketch of this per-app decision appears after this list).

4. Stopping cloud-sourced malware and threats – 50% of malware found in enterprise environments is now downloaded from Microsoft cloud applications. An SSE solution can decrypt, identify and block threats coming from these cloud apps into your environment.

5. Reducing impacts to user productivity – With more users remote, IT teams relay (or “hairpin”) traffic back through a corporate network via a VPN to provide secure access to data sources. This slows access and hurts user productivity. A Zero Trust Network Access solution eliminates the need to hairpin traffic back through the network, increasing the speed of cloud application access and user productivity.

6. Providing insight into risky user behavior – Legacy security solutions don’t have the sophistication to alert you to risky user behaviors, such as sharp increases in corporate data downloads. SSE solutions map user access to contextual behavior to alert you to unusual activity with User and Entity Behavior Analytics (UEBA).

7. Blocking internet threats – Unfettered access to the internet can lead a user to introduce malware and threats into your environment by accessing risky websites. A Secure Web Gateway, an SSE solution, can block user internet access to potentially dangerous websites.

8. Controlling the high IT costs and complexity of legacy solutions – Renewing contracts on multiple point products such as legacy VPNs, firewalls and secure web gateway appliances quickly adds up in IT costs and complexity. A single SSE platform reduces costs and increases ease of management.

9. Avoiding cloud security misconfigurations – Many data breaches are a result of misconfigured cloud infrastructures, SaaS and IaaS. Cloud security posture management (CSPM), a supplementary SSE service, can help you automatically identify and remediate these misconfigurations.

10. Eliminating data loss on external storage devices – Users are also able to download sensitive data from their endpoints onto external USB storage devices. Data Loss Prevention (DLP) solutions protect against this risk, providing greater visibility and policy control on this behavior.
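To make reason 3 above concrete, here is a minimal, illustrative Python sketch of a least-privileged, per-app access decision. The roles, apps and risk threshold are invented, and a real SSE platform evaluates far richer context (device posture, location, UEBA signals and more):

```python
# Hypothetical role-to-application grants; no broad network access is implied.
ROLE_APPS = {
    "finance": {"erp", "expense-portal"},
    "engineer": {"git", "ci-dashboard"},
}

def decide(user_role: str, app: str, risk_score: float) -> str:
    """Grant direct access to a single app, never to the whole network."""
    if app not in ROLE_APPS.get(user_role, set()):
        return "deny: app not granted to this role"
    if risk_score > 0.8:  # e.g. a UEBA-style anomaly signal
        return "deny: risky behavior detected"
    return "allow: direct access to this app only"

print(decide("finance", "erp", 0.1))   # allow
print(decide("finance", "git", 0.1))   # deny: not in role
print(decide("engineer", "git", 0.9))  # deny: risky
```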

As you can see, there are several monsters in the cloud. Dell has partnered with Netskope, a leader in the SSE space, to help keep our customers safe with best-in-class cloud security solutions. Speak with your Dell sales rep or visit our Endpoint Security solutions site to learn more about Netskope’s SSE solutions (including a new DLP solution available in the fall) which can better protect your enterprise.

Source: dell.com

Saturday 22 October 2022

Adapting to Climate Change


Last week I attended Climate Week in New York City, which brought together leaders across governments and businesses, policymakers, scientists and NGOs to share their unique perspectives on how to collaborate to drive climate action. Historically, emissions reduction (“mitigation”) has been a primary focus of the many events and sessions, and clearly it is still an important part of the conversation. This year, however, the topic of climate adaptation played a larger role and it had me pondering how Dell Technologies should be thinking about adaptation and the supportive role technology can play.

Climate adaptation recognizes the world is already experiencing impacts from climate change and preparing for a climate-resilient future needs to start now. It doesn’t mean we lower our ambition to drive down carbon to combat the effects of climate change; it means we need to evaluate and evolve the processes, systems, tools and behaviors which can be implemented to prepare for, or live with, those impacts more effectively. Just as technology can help individuals, organizations and entire communities engage more equitably and fully in a rapidly evolving digital future, it can also help address needs that are a part of climate adaptation.

As part of this exciting week, I represented Dell Technologies during the World Economic Forum (WEF) session to kick off the newly formed Climate Change Adaptation Community. I was able to engage with other public- and private-sector leaders about organizational resilience, opportunities to drive change and how we would define leadership. This will culminate in an initial WEF report in time for November’s COP27 in Egypt, and examples underpinning the report are already in progress. Dell is currently investigating various sectors that could be positively impacted by leveraging technology to drive greater efficiency or effectiveness. The agriculture industry, for example, could use these interventions to adapt to more volatile weather patterns and increase yields to feed a growing world. Data modeling could drive more accurate future-state assessments, and the development of more effective early warning systems could save lives. The possibilities are endless, but to be most effective, the exploration must begin now.

Technology can be an enabler of a low carbon transition and help with mitigation strategies as well. Innovators from Dell Technologies, IOTA Foundation, ClimateCHECK and BioE gathered in New York last week to showcase a new process which will reshape what data confidence can look like when it comes to emissions. Project Alvarium is the culmination of four years of collaboration. It represents a landmark joint effort to unify open-source and commercial trust insertion technologies in a standardized environment. It could be leveraged to enable better tracking of carbon footprints and provide data to educate organizations and communities on opportunities to reduce their environmental impact. Working like this, across different industries, will only help to increase the scale and impact we can have.

And last but certainly not least, one of the strongest commitments we can collectively make to combat climate change is to zero out emissions. Dell Technologies, as part of our commitment to advancing sustainability, set a goal to reach net zero greenhouse gas (GHG) emissions across our entire value chain – scopes 1, 2 and 3 – by 2050. We are committed to achieving our net zero goal and supporting our customers, and society, as they achieve theirs.

With continued collaboration, strategic partnerships and innovative technology we will continue to drive positive change and advance climate action for people and the planet. It will take everyone working together to scale our impact on climate change. I, for one, am here for it.

Source: dell.com

Friday 21 October 2022

Is Data Scientist Certification Worth It? Spoiler Alert: Yes


Most people want to know how a Data Scientist certification can advance and improve their careers, so it is worth considering pursuing one. In an IT industry already characterized by intense competition, using certified expertise to distinguish yourself for a particular position pays off.

People work toward earning a Data Scientist certification because it can significantly improve their professional life. Which certification you decide to pursue and concentrate on for success is entirely up to you.

Various organizations offer numerous certifications, but you should select only those with market validity, such as the Data Scientist certification. Certified Data Scientists demonstrate these skills and abilities in work that goes beyond the typical.

These certified skills have practical uses in many situations, from day-to-day work to positive social impact to earning more money. Every business today seeks to hire certified Data Scientists to improve its performance.

Who Can Become a Data Scientist? Is It Suitable for You?

Data Science Certifications provide you with proof of the qualities and knowledge you have. If you have achieved excellence in the field and earned top certifications, then trust me, no one can stop you from getting hired as a data scientist.

The widespread use and presence of data, and its relevance in today’s digital world, have given birth to a surging need for experts and professionals in data science. Data science has become an indispensable tool for industries and businesses to gain valuable insights that amplify and improve their operations, stand out in their field, and outperform their contemporaries.

The demand for data scientists has paved the way for graphical user interface tools that do not require expert coding knowledge. With a solid understanding of algorithms, you can quickly build data processing models. Even without strong coding knowledge or a remarkable degree in data science, you can still become a data scientist; good learning capabilities matter more than a degree.

New technologies allow organizations to easily collect large amounts of data, but they often do not know what to do with this information. Data scientists use advanced methods to help bring value to data. They collect, organize, visualize, and analyze data to find patterns, make decisions, and solve problems.

Data scientists require strong programming, visualization, communication, and mathematics skills. Typical job responsibilities include gathering data, creating algorithms, cleaning and validating data, and drafting reports. Nearly any organization can benefit from the contributions of a trained data scientist. Potential work sectors include healthcare, logistics, banking, and finance.
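For a flavor of what “cleaning and validating data” looks like in practice, here is a minimal pandas sketch; the columns, values, and validation rules are invented for illustration:

```python
import pandas as pd

# Invented raw data with typical problems: duplicates, an invalid age,
# and a cost field captured as text.
raw = pd.DataFrame({
    "patient_id": [1, 2, 2, 3, 4],
    "age":        [34, -5, -5, 61, 45],
    "visit_cost": ["120.5", "80", "80", "n/a", "230"],
})

clean = (
    raw.drop_duplicates()  # remove repeated rows
       .assign(visit_cost=lambda d: pd.to_numeric(d["visit_cost"], errors="coerce"))
       .query("age >= 0")               # validate: no negative ages
       .dropna(subset=["visit_cost"])   # drop records with unusable costs
)
print(clean)  # only the rows that pass every check survive
```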

How Can Experts Help You Learn Data Science?

Thorough knowledge of NoSQL, Hadoop, and Python is expected of a data scientist: many data science positions require familiarity with data systems such as Hadoop plus a whole toolchain built around Python. The industry’s growing demand for faster analysis means professionals should get on board and begin learning immediately.

At the same time, technical knowledge makes it possible to rely on automated analysis rather than manual processes. Python and R have transitioned from research tools to preferred choices precisely because they make it possible to automate most of that work.

Learning from experts gives candidates the ability to keep acquiring new skills, resources, and technology while staying up to date on the projects and teams they work with and are interested in.

Data science significantly influences many applications, most notably in the healthcare and scientific fields. As a result, big data analytics has become a strategic, leading focus in companies of all kinds.

End Notes

Data science credentials are not useless. Many platforms offer certificates for data science courses because they are a decent way for students to prove they are actively learning new skills. Recruiters appreciate seeing candidates continually attempting to develop themselves, so credentials can help your application for employment.

However, the impact of a credential alone will probably be minor. What matters most is whether you can perform the job, and recruiters will assess that primarily through your project portfolio.

Thursday 20 October 2022

Dell’s New T-shirt Take on Infrastructure Capacity Planning


There’s an old saying: you can’t manage what you can’t measure. That saying drove Dell Digital’s new infrastructure capacity planning and forecasting effort, built to keep pace with our record organic growth and new business demand for IT services inside Dell.

In the fall of 2020, we began building a team to address the challenge that Dell Digital, Dell’s IT organization, needed better forecasting guidance to stay ahead of our growing infrastructure demand.

Our organic growth of existing IT systems, which normally runs around 8-10 percent year-over-year, had exceeded 41 percent. We also faced rapid new internal business growth from new products and an internal self-service catalog.

However, our data on existing and projected IT resource needs was spread out in separate locations across our organization with no cohesive measurement standards or analysis strategy. We relied on manual processes and spreadsheets to perform infrastructure capacity planning and forecasting.

We built a capacity forecasting team of data scientists and analysts to review data spanning our current systems, organic growth trends, business demands and future resource insights. The team not only aggregated historic, current and projected resource data from different sources into a central data repository but also created a uniform measurement model to provide better clarity for data center forecasts and transparency for users.

The result is an automated planning and forecasting model that enables Dell Digital to maintain six months of on-demand capacity and lets us keep an 18-month rolling forecast for our supply chain.

Forecasting with T-shirt Sizes 


We started creating our capacity forecasting model by looking at what our systems were doing and what we predicted they’d do down the road. We looked at organic growth over time plus internal business growth and worked with our business segments to understand what they’re planning to do.

The next step was converting that data into a demand forecast so that we can signal increases or decreases in capacity requirements both to manufacturing and to our interlock teams, including those that manage our facilities, power and rack space. We also strove to signal our manufacturers about what demand is projected to be 18 months in advance.

As we mapped our forecasting strategy, we decided we needed a standard measurement of our infrastructure use to track current and future capacity based on how users consume IT resources rather than on individual infrastructure components. We chose an increasingly popular and friendly measurement technique based on T-shirt sizes. T-shirt sizing is a capacity planning tool in which you assign each project or task a T-shirt size—from extra small to double extra-large (XXL)—to represent its scope or scale.

For example, an extra small T-shirt might be a sandbox or proof-of-concept environment. An extra-large T-shirt would be a full-scale, full-size production series of databases for a major project.

This planning measurement approach lets us tie our forecasting strategy to how our team members consume our IT infrastructure through our self-service Dell Digital Cloud Portal. Since users are consuming our products in T-shirt sizes, it makes sense to plan our capacity in T-shirt sizes. We then take that data and use AI and ML algorithms to help us spot trends and create forecasts.
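As a simplified illustration of that last step, the sketch below expresses consumption in invented T-shirt sizes and fits a straight-line trend to a made-up quarterly history. Dell Digital’s production model uses richer AI/ML pipelines, so treat this purely as a sketch of the concept:

```python
# Hypothetical T-shirt sizes mapped to (vCPUs, GiB RAM) per unit.
SIZE_SPECS = {"XS": (2, 8), "M": (8, 32), "XL": (32, 128)}

quarters = [1, 2, 3, 4, 5, 6]
medium_tees_consumed = [120, 135, 160, 178, 201, 226]  # invented history

# Ordinary least-squares slope/intercept for a straight-line trend.
n = len(quarters)
mean_x = sum(quarters) / n
mean_y = sum(medium_tees_consumed) / n
slope = (
    sum((x - mean_x) * (y - mean_y) for x, y in zip(quarters, medium_tees_consumed))
    / sum((x - mean_x) ** 2 for x in quarters)
)
intercept = mean_y - slope * mean_x

# Project the next six quarters (an 18-month rolling horizon).
for q in range(7, 13):
    tees = slope * q + intercept
    vcpus, ram = SIZE_SPECS["M"]
    print(f"Q{q}: ~{tees:.0f} M tees -> {tees * vcpus:.0f} vCPUs, {tees * ram:.0f} GiB RAM")
```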

Each size takes up a defined space and has an associated cost, which is essential to staying within budget from a CapEx perspective.

By using T-shirt sizes, we can be more transparent with users about what they consume and more precise in forecasting capacity needs in our data centers. We work with internal business groups to determine how much of a certain size T-shirt they want and help them understand what they can get for their money.

A T-shirt could be any number of infrastructure products, including virtual machines, containers or even functions as a service. Each of our business segments has unique ownership of our internal applications, and once we understand the behaviors of those applications and how they break down into T-shirt sizes, we can start to better understand the future.

We now standardize our multicloud experiences by leveraging T-shirt sizing across Dell Digital to enable a consistent experience spanning our private cloud and our public cloud offerings.

Sizing Up the Big Picture


Using our T-shirt model, we took quarterly and yearly historical trends, converted that into how many T-shirts we consume in each of our 27 data centers and scaled up from there.

In the process, we’re looking at everything from how much power and space we have to how many racks we need and how much power must go to each rack for our substrate design.

We’ve created a 15-year model around how we think we need to grow and scale our data centers. This includes new data centers coming online, older data centers spinning down and migration models for those transitions.

Overall, we’re thinking of our capacity and data center strategy from the big picture over five, 10 and 15 years.

Data Storytelling


At the very outset of our team’s forecasting work, we discovered and addressed a critical IT resource need. The data was clearly telling us a story. Analyzing data from multiple sources and looking beyond our previous incremental growth assessment, we realized that we only had about an 18-month runway before we would be constrained in our current data centers.

Armed with this new data, Dell Digital was able to spin up two new data centers within 90 days to meet our urgent demand for storage and compute capacity.

Since then, our planning and forecasting strategy has vastly improved our visibility on IT capacity needs, with a five to 10-year data center strategy, an 18-month supply chain projection and better consumption insights and metrics for our users.

Our planning goal going forward is one that any modern IT organization needs to achieve. We are seeking to balance our facilities, network and power usage and make sure that we’re building in resiliency plans. Capacity planning is not just looking at application requirements but looking at the overall health of the environment. And that could be everything from space, power and cooling to racks, software-defined storage, compute and networking.

Source: dell.com

Wednesday 19 October 2022

Oracle VMware Solution and Dell Data Protection – Better Together


The ancient Chinese philosopher Confucius is believed by many to have said, “Life is really simple, we just insist on making it complicated.” The same can be said for hybrid multicloud computing. Many organizations have gone through the enormous effort of refactoring their applications to work natively in the public cloud when much simpler options are available.

Take for instance Oracle Cloud VMware Solution (OCVS).

VMware virtual machines (VMs) generally operate the same way in Oracle Cloud VMware Solution as they do on-premises, typically eliminating the need to refactor applications for the cloud. The same VMware tools and operational processes are used to manage VMs wherever they reside, making management of your hybrid multicloud workloads much simpler.

But it gets better: when you pair Oracle Cloud VMware Solution with Dell multicloud data protection and security solutions, you get the operational consistency, simplicity and efficiency of a cloud deployment plus the added security to help protect critical workloads across your hybrid cloud environment.

“The integration of Oracle Cloud VMware Solution and the Dell Data Protection Suite, can help customers secure critical workloads residing in the cloud or on-premises with simplicity and efficiency while optimizing their hybrid multicloud environments,” said David Hicks, group vice president, Worldwide ISV Cloud Business Development, Oracle. “We look forward to our continued work together.”

Organizations are utilizing Oracle Cloud VMware Solution with the Dell Data Protection Suite to extend their IT operations into the Oracle Cloud to enhance their business agility while ensuring the protection and security of their critical workloads and data.

In addition to delivering a consistent hybrid multicloud operational experience, Dell cloud data protection and Oracle Cloud VMware Solution have several unique capabilities worthy of mention:

◉ VMware operational control: In Oracle Cloud VMware Solution, customers have ESXi root credentials. This gives customers complete control over VMware upgrades in their cloud environment and enables them to use the same VMware tools for managing VMware provisioning, storage and lifecycle policies in the cloud as they do on-premises. This level of control helps ensure operational consistency across hybrid cloud environments.

◉ Cloud-efficient protection: In addition to delivering direct integration with VMware vSphere environments to auto-detect and auto-protect VMware workloads (VMs and Kubernetes containers), Dell multicloud data protection solutions deliver efficient ways to protect data on-premises and add to the built-in security in Oracle Cloud with data deduplication, which reduces the physical storage footprint required to store and archive critical data (a conceptual sketch of deduplication appears after this list). When combined with application direct technologies, organizations can also minimize the payload of data transfers to and from the cloud, as well as between cloud regions, helping to reduce cloud egress costs.

◉ Database adjacent services: Once your data is in Oracle Cloud VMware Solution, all OCI cloud services, including OCI’s database services, are immediately adjacent to your VMware workloads in the Oracle Cloud. This makes it easy for database administrators and developers to leverage database services to speed up application development, run fast queries to support on-demand data analytics, or to test DR recovery capabilities using solutions like Dell PowerProtect Data Manager to orchestrate recoveries in the cloud.

◉ Manage and protect traditional and modern workloads: Oracle Cloud VMware Solution supports VMware Tanzu Standard Edition, giving organizations a consistent way to deploy and manage Kubernetes containers across hybrid cloud environments. Likewise, Dell PowerProtect Data Manager enables organizations to automate the protection of Kubernetes containers in Oracle Cloud VMware Solution environments to help protect containers and VMs anywhere they reside. In addition, Data Manager provides self-service capabilities through the UI and open APIs to reduce IT administrative overhead and empower key end-users like developers to protect and recover their own workloads.
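To illustrate the deduplication idea referenced above, here is a conceptual Python sketch. It is not Dell’s implementation – production systems use variable-length chunking and far more efficient indexes – but it shows why duplicated backup data consumes little additional physical space:

```python
import hashlib

CHUNK = 4096
store = {}  # fingerprint -> unique chunk, stored exactly once

def ingest(data: bytes) -> list:
    """Split data into chunks, keep each unique chunk once, return the recipe."""
    recipe = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        fp = hashlib.sha256(chunk).hexdigest()
        store.setdefault(fp, chunk)  # only new content consumes space
        recipe.append(fp)
    return recipe

backup1 = b"A" * 8192 + b"B" * 4096
backup2 = b"A" * 8192 + b"C" * 4096  # mostly the same data as backup1
r1, r2 = ingest(backup1), ingest(backup2)
logical = len(backup1) + len(backup2)
physical = sum(len(c) for c in store.values())
print(f"logical: {logical} bytes, physical: {physical} bytes")
# logical: 24576 bytes, physical: 12288 bytes -- duplicates stored once
```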

Oracle Cloud VMware Solution and Dell multicloud data protection and security solutions give organizations an easy way to extend their on-premises production VMware workloads into the Oracle Cloud without the added complexity of application refactoring or the need to use a completely different set of management and data security tools to address application SLAs. This potent combination helps simplify IT and delivers the flexibility organizations need to stay agile in the data era.

Source: dell.com

Tuesday 18 October 2022

PowerEdge XR4000: Compute Optimized for the Edge


The announcement of the Dell PowerEdge XR4000 marks a continuation of the Dell Technologies commitment to creating platforms for the Edge. These XR servers can withstand the unpredictable and often challenging deployment environments found in non-data-center locations. Whether it’s in the space-constrained back-offices of your retail locations, or directly next to heavy machinery on your dusty manufacturing floors, Dell’s Edge servers deliver the flexible compute capabilities our customers need.

Dell’s Shortest Depth Edge Server


This high-performance multi-node server was purpose-built to address the demands of today’s retail, manufacturing, and defense customers.

The XR4000 is designed around a unique chassis-and-compute-sled concept. The chassis comes in two 14”-depth form factors, referred to as “rackable” and “stackable.” The actual compute resides in modular sleds in 1U or 2U form factors, with power being the only component shared between the sleds.

Two Chassis + Two Sleds = Ultimate Flexibility


Front and rear views of the Dell PowerEdge XR4000 “rackable” (left) and “stackable” (right) Edge Optimized Servers.

The “rackable” chassis supports 4x 1U, 2x 2U, or 2x 1U + 1x 2U sled configurations and fits into a standard-width rack, allowing for deployment in locations with existing compute infrastructure. The “stackable” chassis is also 14” (355mm) deep but only 12” wide, and it can be deployed on a desktop or stacked on a shelf with its innovative built-in latches. Adding even more flexibility, both chassis are wall-mountable, ideal where floorspace comes at a premium. Both chassis designs also support single-side I/O, allowing all I/O (including power) to be accessed from the same side.

Powered by Intel


At the heart of the XR4000 compute sleds is the Intel® Xeon® D processor. This ‘made-for-the-edge’ CPU comes with up to 20 cores, meaning a rackable chassis can be deployed with up to 80 total cores. The sleds also include 4x memory slots (3200 MT/s, up to 128GB coming soon after launch) and 4x M.2 storage slots, as well as a separate BOSS card for OS partitioning. The 2U sled has the same features as the 1U sled while adding 2x Gen4 PCIe slots to support GPUs, DPUs and other NIC options.

Introducing the Nano Witness Server


The XR4000 delivers even more innovation with the addition of an optional nano server sled. Replacing the need for a virtual witness node, the Nano Server can function as an in-chassis witness node, allowing for a native, 2-node vSAN cluster even in the 14” x 12” stackable server chassis. This enables virtual machine deployments where the option was previously out of the question due to latency or bandwidth constraints.

Designed for the Unpredictable Edge


As a MIL/NEBS-tested server, the XR4000 can operate in temperatures from -5°C to 55°C, withstand most levels of shock and vibration, and handle the extremes of remote field deployments or Black Friday shopping crowds.

Additionally, both chassis types have the option to include a lockable bezel, with intelligent filter monitoring that will alert the Integrated Dell Remote Access Controller (iDRAC) when the filter needs to be changed, keeping your server free of the dust, pollen, and other air particles found in most commercial locations.

Flexibility, Scalability and Serviceability


The PowerEdge XR4000, Dell’s shortest-depth server to date, is purpose-built for the edge, delivering high-performance compute and ultimate deployment flexibility. This multi-node, two-chassis server can manage unpredictable conditions while saving much-needed floor space. Its unique design allows you to start with one server and scale for additional capacity. Built to survive outside of the data center, it is an attractive option to deploy on the manufacturing floor or in a retail back office.

Also Offered to Our OEMs and Partners


For OEMs, from bezel to BIOS to packaging, your servers can look and feel as if they were designed and built by you. For more information, visit our OEM solutions site. For Partners, it’s available through our extensive partner network which includes implementation and managed services.

A new addition to the edge portfolio, PowerEdge XR4000 is deployed as a part of the Dell Validated Designs for Manufacturing Edge and Retail Edge. To learn more about the PowerEdge XR4000, visit the PowerEdge servers page.

Source: dell.com

Monday 17 October 2022

Optimism, Vision and APEX at Dell Tech Summit


Michael Dell and the Dell Technologies leadership team are confident the strategies they’ve developed, including the direction of its APEX portfolio, will allow the company to thrive amid economic headwinds without taking its eyes off its mission to develop a seamless, outcome-focused technology experience for customers.

In fact, these two key topics discussed by Dell, Vice Chairman and Co-COO Jeff Clarke, Co-COO Chuck Whitten and CMO Allison Dew during Wednesday’s Dell Technologies Summit broadcast go hand-in-hand.

“It’s not our first rodeo,” Dell said, noting that his namesake firm’s supply chain, global support capabilities, as well as deep customer and partner relationships, have helped the company weather periods of economic turmoil throughout the 38 years since he founded it in his University of Texas dorm room. “We know how to emerge stronger from this,” he said.

Today, with its growing APEX portfolio and a focus on multicloud ecosystems that link core, cloud and edge environments, Dell is deepening its customer relationships as those customers focus more on outcomes and “simple ways to more quickly drive more transformation inside their businesses,” Dell said.

And while economic headwinds are obvious to all, the tailwinds provided by customers’ desire to do new and exciting things with the vast amounts of data they’re producing has Dell Technologies focused both on helping customers weather the storm today and on playing a leadership role in establishing a simpler, healthier, safer and more successful world in the coming decades.

“Anything you want to do that’s interesting or new or exciting in the world,” Dell said. “Data is the common denominator, and that is an enormous tailwind for our continued growth as a business.”

The company refined its multicloud strategy and launched a slew of new and updated storage products at Dell Technologies World in May, and Wednesday announced Project Frontier, an edge operations software platform that helps customers simplify, optimize and scale edge applications in a secure way, as well as expanded Microsoft Azure Stack HCI solutions.

At the center of the company’s efforts now and into the future is “APEX, APEX, APEX,” Dell said. Dell Technologies has steadily built more and more capabilities into its multicloud as-a-service APEX portfolio, from public and private cloud to cyber recovery and backup services.

Amid frequent and costly cyberattacks, customers are looking for help with security, and Dell said the company’s mission is to make security easier for customers.

“We’ve been building security throughout our products and supply chain for decades, but now we’re stepping it up with Dell trusted devices and trusted infrastructure solutions to make it easier for customers to adopt Zero Trust practices throughout their environment,” Dell said, highlighting cyber resiliency assessment services, and services to manage, detect and respond to threats, as well as Dell’s flagship Cyber Vault, which is available through APEX.

“At the end of the day, our roadmap is defined by customer requirements,” Dell said. “More and more of our customers want to drive to a flexible consumption model and an outcomes-based approach.”

Whitten said simplicity and consistency are key for customers: “Ultimately, what customers are asking us for is a simple, consistent cloud experience across their multicloud, multi-edge, multi-data center environments,” he said. “And they want to be able to consume infrastructure multiple ways.” He noted that customers tend to move from buying infrastructure to subscribing to it and then toward having it all managed for them by Dell.

The flexibility and simplicity of APEX solutions can help customers reach long-term goals, and can also play a role in helping them through what may be a rocky economic period in the short term.

“Customers are looking for ways to dedicate more of their spend and more of their energy to the things that actually drive differentiation for them,” Dell said. “Whether it is pre-configured appliances or APEX cloud services or other consumption-type models, all that is helping them along that same path. Some of the discussions have shifted from growth to cost given the economic challenges out there, so customers are even more focused on, ‘How do I point the limited resources I have to the challenges that really make a difference?’”

Long-term, Dell Technologies leaders see APEX as much more than a way to buy the company’s products and services. They see APEX as an overarching problem-solver “across a customer’s estate,” Clarke said. “It is our multicloud answer and the underlying architecture of what we’re building in the company.”

For Clarke, multicloud is the aggregation of many clouds in a way that allows them to work as a single system. With software to manage things like Zero Trust security at scale, services, applications and automation, the company can apply the same technology concepts across clouds to solve customer problems from remote mines to community hospitals.

Customer relationships make these things possible, Dew said. “We start with our products, with reliability, support, an emphasis on security and then carry that through to our emphasis on who we are around ethics, trust, privacy and really making sure those parts of our ESG [Environmental, Social and Governance] goals are really foundational to who we are,” she said. “We help our customers weather this really complex environment, and we continue to be a company our customers and partners are proud to do business with.”

Source: dell.com

Saturday 15 October 2022

Unlocking the Value in E-waste


Today is International E-Waste Day. This year’s theme – “recycle it all, no matter how small” – reminds us that discarded, unused electronics, or e-waste, present one of the fastest-growing global environmental challenges of our time.

It is estimated that more than 57 million tons of electronics will be discarded this year. This is equivalent in weight to 82,000 school buses or 4,700 Eiffel Towers – enough to cover the size of Manhattan – and that’s just e-waste production in a single year. Only 17.4 percent of that volume is recycled each year.

When returned for reuse or recycling, end-of-life electronics contain valuable, reusable components, parts and minerals that can be responsibly harvested for other uses. The carbon footprint of electronics shrinks when components and materials are reused because we extend their life. And for every pound of steel, aluminum, plastic or copper we recover for reuse, we save a pound of material from being newly manufactured or extracted from the ground.

End-of-life electronics returned through our Dell Technologies recovery and recycling services are given a second chance. We extend their usable life and accelerate the circular economy. In fact, we have recovered more than 2.6 billion pounds of used electronics since 2007.

To avoid turning end-of-life electronics into e-waste, we work to unlock their value:

1. Design for circularity – We embed circular principles into every aspect of product design.

2. Repairability – We make it easy for consumers to repair a device by providing product manuals online, offering services like our Dell AR Assistant, and designing for better repairability. The longer we keep our electronics in use, the greater the impact.

3. Take back services – We provide convenient services to recover and recycle end-of-life devices when the technology no longer meets a user’s needs.

4. Maximize reuse – Once a device is returned, we maximize its reuse potential by taking the following steps:

◉ Sanitize and secure data*
◉ Refurbish systems that can be resold or donated for continued use
◉ Harvest all usable parts to extend the lifecycle
◉ Extract key materials – like plastics, magnets and aluminum – to reuse in new Dell products
◉ Responsibly recycle all other materials

We understand the value of legacy electronics – both for our commitment to circularity and for the health of the planet. In fact, we have set an ambitious goal to tackle this challenge: by 2030, for every product a customer buys, we will reuse or recycle an equivalent product.

In addition to our existing recycling services, we continue to find innovative new ways to make it easier for people and businesses to return their used electronics. In the last year, we launched pilot programs to raise awareness about the importance of electronics recycling and to drive people to act:

◉ We reached consumers who purchased certain laptop models with an on-package recycling message encouraging them to reuse the box to return their old equipment.

◉ We tested an innovative service that uses delivery lockers in apartment buildings. This campaign encouraged apartment dwellers to deposit unwanted electronics in shipping lockers for recycling.

◉ We joined forces with technology peers to pilot a curbside recycling program for consumers in Denver, Colorado.

◉ And, for business customers of all sizes, we modernized Dell’s Asset Recovery Services globally – now supporting 36 countries and available through our channel partners.

We established our global recycling services more than 25 years ago and we continue to evolve to keep pace with changing consumer and business demands. We are driving innovation to increase the volume of products from all brands, not just Dell, for refurbishment, reuse and recycling.

Help us put a dent in e-waste by trading in or recycling your end-of-life device today. Visit Dell’s Recycling Solutions page for more information and to learn how. We’ll take it all – no matter how small – as we continue to unlock the value in e-waste.

* Dell Technologies does not accept liability for lost or confidential data or software when recycling through the Dell Reconnect program. You are responsible for backing up any valuable information and erasing sensitive data from the hard drive before returning to Dell. To completely erase the hard drive, there are a number of free services available online.

Source: dell.com

Friday 14 October 2022

Get Your Edge Together – with Project Frontier


There are many ways to characterize edge computing. But boring is not one of them.

I believe we are at an inflection point where edge computing technologies will impact every aspect of our business and personal lives. We know that edge computing is not new. At Dell Technologies, we have been helping customers be successful at the edge for over 20 years. In fact, over 81 percent of Fortune 100 companies use Dell for their edge solutions. In countless conversations with customers, we have witnessed an acceleration of edge deployments and the increased challenges associated with them. To maximize the vast potential of the edge and address these challenges, enterprises will need to get their edge together. This is where Project Frontier comes into play.

What’s the Hype All About?


The acceleration of innovation in applications, technology and data has created a perfect storm.

The reduced costs of sensors, compute and storage have enabled massive growth in data and new types of data. Concurrently, advancements in AI/ML, small form-factor computing, low-latency 5G networking and software-defined “everything” help us capture, curate and analyze data at the edge and act faster than ever. In addition, multicloud maturity enables tremendous flexibility and scalability. When you add those capabilities to organizational needs for improved productivity on the one hand and differentiation on the other, it all comes together.

Unique Challenges at the Edge


Unlike traditional IT infrastructure, the business requirements at the edge are varied and constrained due to the distributed nature of the edge. Thus, the requirements are fundamentally different in scale and scope from traditional data centers or the cloud.

The diversity of hardware, levels of ruggedization, constrained spaces and environmental conditions such as extreme temperature ranges and harsh environments, to name a few, create the antithesis of the control you have in a data center. Hardware is connected to critical machinery and supports Operational Technology workloads and diverse protocols, which increases the complexity of operations. Further, edge infrastructure is typically distributed across many geographically dispersed sites, which leads to complexity in connectivity, maintenance and support. Compounding these challenges is the fact that qualified IT staff are not usually present on site.

With enterprises organically adding edge solutions as use cases present themselves, we have seen a proliferation of solution silos at the edge. This siloed approach creates management complexity, making it almost impossible to achieve economies of scale. Critical concerns over secure operations are exacerbated by the distributed, heterogeneous nature of the edge and its increased attack surface for malicious actors. As a result, a breach can have impacts on everything from competitiveness and profitability to human safety.

Require Fresh Approaches to Solving Them


Our discussions with customers have led us to understand that the challenges faced across even very different industries and businesses are remarkably consistent, and that we are well positioned to help customers overcome them. To do that, we must re-imagine edge operations within the unique set of constraints at the edge and design around them.

Imagine a solution that works without skilled IT onsite. One that assumes zero trust and could work with limited or no connectivity. A solution with the flexibility to start small with the capability to grow to a massive scale. A solution that embraces multicloud applications running at the edge, enabling access to Dell’s edge ecosystem and partners so we could innovate together.

Introducing Project Frontier, an Edge Operations Software Platform


Today we are proud to announce Project Frontier, our initiative to deliver an edge operations software platform that unifies the operation of edge infrastructure and applications across any industry. Project Frontier is a fresh approach: it addresses the unique, complex nature of the edge by re-imagining a better way to do edge operations.

With Project Frontier, customers can:

◉ Simplify their edge operations at scale
◉ Optimize their edge investments
◉ Secure their edge estate with zero trust security


Project Frontier helps customers by delivering an Edge Secure Environment with the ability to orchestrate applications and manage infrastructure at scale.

The Edge Secure Environment is unique because it provides a secure and scalable operating environment to run application workloads at the edge. Because it is built on existing, qualified Dell edge hardware optimized for the platform, ranging from gateways to commercial PCs to servers, you can consolidate your applications by hosting virtual machines (VMs) and containers right at the edge.

Dell is one of the only companies that can ensure a secure supply chain, employing everything from tamper detection to immutable hardware roots of trust and cyber-resilient security. Project Frontier’s infrastructure management controls all the devices in the customer’s edge estate, remotely, on-premises or in a private cloud, throughout their entire lifecycle. And the “magic” happens in the application orchestration, which can deploy home-grown or off-the-shelf software applications to the edge, data center and cloud, using industry-standard templates to define blueprints.
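To make the blueprint idea concrete, here is a minimal sketch of what a declarative application blueprint could look like. Dell has not published Project Frontier’s template format, so every field name below is an assumption; the point is only that one declarative document can describe a workload and target edge, data center and cloud sites alike.

```python
import json

# Illustrative only: Project Frontier's real template format has not been
# published, so every field name below is an assumption. The sketch shows
# the idea of a declarative blueprint that an orchestrator could deploy
# to edge, data center or cloud targets alike.
blueprint = {
    "name": "vision-inspection",
    "version": "1.0.0",
    "workload": {
        "kind": "container",                       # could equally be "vm"
        "image": "registry.example.com/inspect:1.4",
        "resources": {"cpu": "2", "memory": "4Gi"},
    },
    "targets": [
        {"site": "plant-01", "placement": "edge"},
        {"site": "core-dc", "placement": "datacenter"},
    ],
}

def submit_blueprint(bp: dict) -> None:
    """Hypothetical stand-in for handing a blueprint to an orchestrator."""
    print(json.dumps(bp, indent=2))

submit_blueprint(blueprint)
```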

Project Frontier can also be integrated by OEM customers to secure and manage their end-customers’ existing and new platforms, while leveraging new offers like the rugged PowerEdge XR400 server.

Edge and Project Frontier 


Edge computing is a strategic initiative that businesses across almost every industry can use to gain a competitive advantage. Project Frontier will deliver a horizontal edge operations software platform, developed to help customers achieve their goals to simplify operations, optimize edge investments and secure the edge.

I sincerely believe edge computing represents the future of enterprise technology. Our re-imagined approach to edge operations will unleash the innovations at the edge that enterprises have been striving to deploy. I am very excited about how we at Dell Technologies can make a positive impact in this new frontier.

Source: dell.com

Thursday 13 October 2022

Enterprise Modernization: From Vertical Silos to an Innovation Platform


Small, medium and large enterprises have long been captives of vertically integrated solutions at the edge. Organizational silos and the much-talked-about lack of alignment between operational technology (OT) and information technology (IT) are often the reasons for this inconsistent, short-term approach.

Enterprise modernization relies heavily on edge-premise solutions to deploy private wireless and intelligent data management. Security, time to value and cost-effectiveness cannot be achieved if enterprises need to build a new stack from scratch for every new service they launch.

The most advanced enterprises are quickly moving from a short-term isolated and verticalized OT deployment environment toward a more consistent and long-term platform for innovation. They understand the modernization process as a continuous journey and look for the best combined total cost of ownership (TCO).

Opportunities and Challenges


Enterprises that want to modernize their business need to ensure the networks connecting their operational processes are robust and flexible enough to support an ever-changing ecosystem of operational innovation. Modern enterprise processes make heavy use of AI/ML and distributed ledger technologies, so the underlying platform needs to provide the upper-layer applications with sufficient computing, storage and connectivity resources. It must also offer flexible network operations: IT teams change device access point names (APNs), move users between networks, and activate and change privileges based on needs that emerge in real time.
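As a minimal sketch of what that flexibility looks like in practice, the snippet below reduces an APN change to a single self-service API call. The endpoint, payload and authentication scheme are assumptions for illustration; no particular vendor’s API is implied.

```python
import requests

# Hypothetical self-service API: the endpoint, payload fields and auth
# scheme are all assumptions for illustration. The point is that an APN
# change becomes one API call the enterprise IT team can make on its own
# timeline, without waiting on an external provider.
BASE_URL = "https://network-ops.example.com/api/v1"
TOKEN = "REPLACE_ME"

def change_apn(device_id: str, new_apn: str) -> None:
    """Move a device to a different APN via the (assumed) self-service API."""
    resp = requests.patch(
        f"{BASE_URL}/devices/{device_id}",
        json={"apn": new_apn},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()

change_apn("sensor-0042", "factory-critical.apn")
```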

The challenge is that in mission-critical environments, waiting to implement the needed configurations is not an option. Operators need to abstract that complexity and allow enterprise customers to easily manage network changes on their terms and timelines.

Why Do Enterprises Need a Horizontal Network to Deploy New Services?


◉ Self-service – Enterprises need control and agility to manage day-to-day network configurations without depending on external providers’ lead times.

◉ Multi-access – Enterprises need the diversity of multiple ecosystems, including 4G, 5G and Wi-Fi, across the production environment. Deploying a single horizontal connectivity layer lets them exploit the vast device ecosystem of 4G and the low latency and high speeds of 5G, or leverage Wi-Fi to steer less critical traffic.

◉ Simple deployment and operations – Edge applications and new use cases must be centrally onboarded, distributed and managed throughout their life cycle. A single orchestration engine integrated into the edge platform can significantly simplify this process, and a consolidated operations and management platform (OMP) gives OT and IT personnel complete control without navigating a maze of separate, complex systems.

◉ Security management – A common horizontal infrastructure allows corporate security policies to be defined at a centralized point and distributed to update the different layers of the network, as illustrated in the sketch after this list. Managing and orchestrating edge hardware and software in a coordinated way reduces security exposure and complexity.

◉ Private/Public Roaming – The most relevant enterprise use cases get even better when used at different sites or from a public network. An underlying horizontal innovation platform can offer a seamless end-user experience, without requiring manual intervention, for example, to select SIM cards and service profile privileges.
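The security management point is worth making concrete. Below is a minimal sketch of “define once, distribute everywhere”: the policy fields, site names and push mechanism are all invented for the example, but they show how a single central definition can fan out across a horizontal infrastructure.

```python
from dataclasses import dataclass

# Illustrative sketch only: the policy fields and the site list are
# invented, and print() stands in for a push through the platform's
# management APIs. One central definition is applied to every site.

@dataclass
class SecurityPolicy:
    name: str
    allowed_protocols: tuple[str, ...]
    require_device_cert: bool

POLICY = SecurityPolicy(
    name="plant-baseline-v3",
    allowed_protocols=("https", "opc-ua"),
    require_device_cert=True,
)

EDGE_SITES = ["plant-01", "plant-02", "warehouse-07"]

def push_policy(site: str, policy: SecurityPolicy) -> None:
    """Stand-in for a per-site policy push over the platform's API."""
    print(f"{site}: applying {policy.name} "
          f"(protocols={policy.allowed_protocols}, "
          f"mTLS={policy.require_device_cert})")

for site in EDGE_SITES:
    push_policy(site, POLICY)
```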


Dell Technologies has been innovating at the edge with open, disaggregated platforms and services that enable our telecom and enterprise customers to modernize operations and generate new outcomes. We partner with industry-leading companies to compose end-to-end solutions that reduce complexity and accelerate time to value. At Dell Technologies, we enable communication service providers (CSPs) to provide enterprises with dedicated networks tailored for performance and flexibility. More information and resources on this topic are available on our Infohub page.

Source: dell.com

Wednesday 12 October 2022

Zero Touch Provisioning is Essential to Meet Infrastructure Demand


Every IT organization is on a mission to get their infrastructure logistics out of the way of their developers with on-demand provisioning. We want to offer seemingly endless capacity to on-prem private cloud users, echoing their public cloud experience. Our goal is to drive adoption and lower costs. At Dell Digital, Dell’s IT organization, we took on end-to-end hardware automation through Zero Touch Provisioning (ZTP) to keep pace with our relentless capacity demand while delivering reliable, scalable on-prem private cloud.


The largest investment in an on-prem private cloud is always going to be compute and storage hardware. Depending on the scale, it will require hundreds or even thousands of servers that will all need to be racked, cabled, tested, configured and installed as virtualization clusters. The operational effort to turn servers into clusters by hand, step by step, does not scale. The more repetitive a task is, the higher the risk of human shortcuts and oversights. These can impact the reliability of your on-prem private cloud for years. Your hardware is the foundation, and your on-prem private cloud can only ever be as scalable and reliable as that foundation.
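To give a feel for what automated testing looks like, here is a minimal sketch of a hardware health check using the DMTF Redfish API, which Dell servers expose through iDRAC. The BMC address and credentials are placeholders, and a real pipeline would validate far more than overall health.

```python
import requests

# Minimal sketch of automated hardware validation via the DMTF Redfish
# API (exposed on Dell servers through iDRAC). The BMC address and
# credentials are placeholders; a real pipeline would use proper TLS
# verification and check far more than the overall health rollup.
BMC = "https://10.0.0.10"
AUTH = ("root", "REPLACE_ME")

def system_health(bmc: str) -> str:
    # List the systems on this BMC, then read the first member's status.
    systems = requests.get(f"{bmc}/redfish/v1/Systems",
                           auth=AUTH, verify=False, timeout=15).json()
    member = systems["Members"][0]["@odata.id"]
    system = requests.get(f"{bmc}{member}",
                          auth=AUTH, verify=False, timeout=15).json()
    return system["Status"]["Health"]   # e.g. "OK" or "Critical"

if system_health(BMC) != "OK":
    raise SystemExit("Server failed validation; hold it out of the cluster.")
```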

As the Director of Zero Touch Engineering for Dell Digital, I built a team to address the efficiency, scalability and reliability of our cluster build process. We collaborated with the infrastructure teams to standardize and automate the core steps of the provisioning process. We now configure new blank servers into standard virtualization clusters, test and validate all functions, and enable them as capacity in a quarter of the time we did previously.

This shrinks the window where the hardware is depreciating while sitting idle in the warehouse and gets capacity to our users much more quickly.

If you continue to do infrastructure hardware deployments manually, you will never achieve economies of scale to meet that capacity demand curve. Standardizing and automating not only makes provisioning faster and more efficient, it also establishes detailed knowledge about the hardware in your environment to provide a basis for day-two operations to maintain your ecosystem going forward.

Transforming Manual Processes


When we kicked off the Zero Touch Provisioning effort two years ago, nine infrastructure teams were involved in the day-to-day effort of building clusters. Many of the construction steps required physically or virtually touching all servers in each cluster to complete the same set of tasks over and over, then moving on to the next cluster. The overall process was high friction and required daily calls and multiple full-time project managers to push clusters through the pipeline from team to team.

What’s more, our Dell Digital organization tripled our hardware asset spending over the previous five years, tripling our deployment burden.

To address this challenge, we first built our ZTP team by bringing in engineers skilled in microservice architecture, workflow management, automation frameworks and front-end design. We identified and clarified existing standards, helped to resolve discrepancies and close any standards gaps. We discovered and added new steps to the provisioning process to increase validation for reliability.

Today, there are 25 steps in our workflow, each with clearly established inputs that are structured and encoded in our ZTP database. Each step replaces dozens of tasks that were previously manual.
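As a sketch of what “clearly established inputs” can mean in code, the snippet below encodes one hypothetical step and its required inputs. The field names and the example step are invented; the principle is that a step becomes eligible to run only when its inputs exist as structured data rather than tribal knowledge.

```python
from dataclasses import dataclass, field

# Illustrative encoding of a provisioning step: the fields and the
# example step are invented. The real ZTP database tracks far more
# per cluster, but the principle is the same - inputs are structured
# records, and readiness is a simple check against them.

@dataclass
class WorkflowStep:
    name: str
    required_inputs: list[str]            # keys that must exist in the facts DB
    produces: list[str] = field(default_factory=list)

    def ready(self, facts: dict) -> bool:
        """A step may run only when every required input is recorded."""
        return all(key in facts for key in self.required_inputs)

configure_bios = WorkflowStep(
    name="configure_bios",
    required_inputs=["service_tag", "bios_profile", "boot_order"],
    produces=["bios_configured"],
)

facts = {"service_tag": "ABC1234", "bios_profile": "virt-std", "boot_order": "hdd"}
print(configure_bios.ready(facts))   # True -> the step can execute
```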

Automating a Step at a Time


Like the proverbial elephant, we tackled the end-to-end automation one bite at a time. We delivered steps to the build process as soon as they were ready, rather than waiting for the end-to-end workflow to be complete. By actively participating in the existing process during the shift to automation, we stayed in close collaboration with the specialized infrastructure teams while also delivering value quarter over quarter.

The automation process is ongoing. We consult infrastructure subject matter experts to analyze each deployment step in detail. The ZTP team then selects the appropriate match from our toolkit to tackle that integration. To automate installation and management of commercial or open-source tools, we start with open-source automation modules as much as possible. When needed, we develop our own integration libraries, always focusing on reusable components.

Automating the actual execution of build tasks to complete each step is about one quarter of the overall effort. We had to develop a system to encode all the required information – the DNA of Zero Touch Provisioning. Nearly 10,000 pieces of information need to be collected, captured or tracked about every cluster for it to flow through the steps. We compile and structure information about all layers of the platform and serve it via APIs, so it is self-service and a shared source of truth for automation and validation processes. This eliminates the friction from meetings, handovers, reliance on institutional knowledge and the endless flow of spreadsheets.
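Here is a minimal sketch of that shared source of truth, assuming a simple REST shape. Flask and the in-memory dict stand in for whatever service and database Dell Digital actually uses; the endpoint path and fact names are assumptions.

```python
from flask import Flask, jsonify

# Sketch only: Flask and an in-memory dict stand in for the real service
# and database. Automation and validation jobs read the same record via
# the API, so there is one source of truth instead of spreadsheets.
app = Flask(__name__)

CLUSTER_FACTS = {
    "cl-austin-017": {
        "site": "austin-dc2",
        "node_count": 16,
        "vlan_mgmt": 210,
        "status": "validating",
    },
}

@app.route("/api/v1/clusters/<name>")
def get_cluster(name: str):
    """Serve the structured facts recorded for one cluster."""
    facts = CLUSTER_FACTS.get(name)
    if facts is None:
        return jsonify(error="unknown cluster"), 404
    return jsonify(facts)

if __name__ == "__main__":
    app.run(port=8080)
```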

Finally, after the tasks are automated and the data is structured, the step can be integrated with the overall workflow so that it is truly automated enough to ‘run itself.’

Realizing the benefits of ZTP takes time. Initially your organization is trading infrastructure engineers for software engineers to build out the automation. But as your on-prem private cloud expands, the speed, efficiency and maintainability benefits will accumulate.

We heavily invested in ZTP to develop the end-to-end workflow that gets us to day one of production – the first day that a cluster goes live as capacity in our on-prem private cloud. However, this diligently structured and standardized design and build gives us the capability to continually audit, maintain and eventually decommission the capacity clusters.

The demand for capacity only continues to increase. Development teams continue innovating and growing their user bases. Internal tools and services grow along with your business. ZTP is an investment that will continue to pay dividends in the efficiency and reliability of your infrastructure capacity.

Source: dell.com