Saturday, 29 August 2020

How Design Features in Our Program Management Services


I’ve worked at Dell Technologies for over 17 years, half of which I’ve spent in Design Solutions, and I love what I do. I’m especially inspired by the amazing work of our customers on the front lines responding to some of the world’s most complex problems: everything from diagnosing and curing diseases to reducing the cost of energy or decreasing factory downtime. As we help our customers troubleshoot and solve challenges, our IP becomes embedded with theirs and we become part of their solution. However, the technology is only half the story – behind the scenes, our program management services also play an important role in bringing many of these innovative solutions to market.

For example, say you’ve developed new software that will automate drone delivery services. Knowing that this will revolutionize the industry, you want to market and sell the solution globally as quickly as possible. While you know it would make sense to package your IP with hardware in order to deliver an integrated turnkey solution, the problem is you’re not in the hardware business. And there are other challenges. How can you scale to meet demand? How can you ensure your product is tested and certified to work in different markets? How do you manage support globally? What about supply chain and logistics? How do you manage the entire end-to-end process?

The burning question


That’s when most people realize that turning a great idea into a market-ready solution is easier said than done. The reality is that these projects are complex and need expert, multi-disciplinary teams that are hard to hire, expensive to retain and take time to manage. You’re not alone. Did you know that according to Accenture, only 30 percent of executives are very satisfied with their ability to convert ideas into market-ready products or services?

Now, here’s the burning question – do you want to hire ten operations people, or would you rather invest in developers for your own unique IP? I think I know the answer! To solve these pain points and accelerate your route to market and revenue, we provide a range of dedicated program management services.

We’ve got your back


An expert project team is dedicated to bringing your vision to life, ironing out any issues you encounter and simplifying the complex. An experienced Project Manager – your key liaison – meets with you to get under the hood of your business, understand your goals, the full breadth of requirements, and where you need help. This is your champion and your go-to person, responsible for bringing your innovative design to market as quickly and cost effectively as possible. At the outset, the Project Manager assembles the right team for your project and sets up an initial meeting to scope the parameters of the work. Once the project is defined and approved, he/she is responsible for driving progress, reviewing timelines, ensuring milestones are met, removing any obstacles to success, avoiding scope creep, and evaluating costs, effectively representing your interests on the go-to-market team.

Your Project Manager has extensive knowledge of and access to wide-ranging Dell Technologies resources, including hundreds of facilities, global laboratories, thousands of experts in marketing, sales, engineering, customization, design, export and compliance, validation, inventory holdings, shipping logistics, award-winning support, and importantly, a global supply chain backed by a multi-billion dollar procurement engine, which provides insulation from memory or SSD shortages as well as economies of scale.

Your reputation is everything


We’re with you every step of the way from design to product development right until the moment your solution is delivered into your hands. Once your solution is shipping in volume, the Project Manager stays in touch, in case your requirements change at any stage. When you transition to a new product or next generation solution, we’re by your side to ensure that what is being built is tried, tested and true, and your quality standards are fully maintained. After all, we know that the quality of the solution reflects on your company’s reputation and bottom line. Let’s look at some real-life customer examples.

Faster delivery times


According to Vectra, our program management services reduced solution delivery time from six weeks to one. In the words of Greg Roche, Vectra Director of Operations, “We receive our components faster, so we can start building appliances and get them to customers sooner. We can standardize three different versions on one system without needing to keep all that inventory ourselves. That gives us flexibility and helps us keep costs down.”

Accelerating deployment


GE Healthcare was migrating a system for medical images to a modular infrastructure, thereby increasing order fulfillment speed by 50 percent. According to Katsuji Nakanishi, Commercial Solution Lead, Radiology and Healthcare IT Departments, GE Healthcare Japan, “We had a close working relationship with a single team for the duration of this whole project. I believe this was key to making the project a success…Quotes were delivered on time and the team gave us detailed information on shipping and delivery schedules that other vendors just could not give us.”

Design is part of the overall experience


The moral of the story is that design is not just about the technology solution; instead, it’s part of the entire end-to-end experience. Our customer mindset is, how can we help you solve problems? How can we design a process to help you scale and grow your revenue?

Source: dellemc.com

Thursday, 27 August 2020

5G, IoT and Security: Protecting Emerging Technology


With the rise of emerging technology, unforeseen security challenges can appear. As 5G becomes ubiquitous, it’s the machines that need to be protected from human beings. That’s because cybercriminals, hacktivists and industrial spies have set their sights on IoT devices as a massive attack surface for denial-of-service (DoS) strikes, data theft and even global disruption.

If you’re a communications service provider reading this, maybe you’re thinking “I’m glad that I’m not responsible for securing all those IoT devices.” But you are. If service providers wish to monetize IoT communications, they’ll need to wrap security around those communications. It’s a big task, compounded by the fact that most IoT devices will be so small that they’ll have no built-in security of their own. The stakes for service providers, however, are too high to ignore: personal data, mission-critical applications and even national security are all at risk from IoT-based attacks.

Okay, now take a deep breath: You don’t need to solve all these problems today — the IoT revolution isn’t here yet. But you do need to be thinking about IoT security right now, studying the potential attack surface of new applications (e.g., telehealth services, connected cars) and developing strategies to mitigate the unknown unknowns that will invariably arise as new IoT applications are created and launched.

What will this new attack surface look like? Let’s dig deeper into a few high-profile IoT applications to understand the potential security risks.

eHealth


Telehealth use has taken off in 2020, but it was already becoming a popular alternative to in-person healthcare, particularly in areas where healthcare services weren’t readily available. One risk of telehealth, however, is the transmission of highly personal information that could be subjected to a man-in-the-middle attack. This risk becomes even more serious when you consider the number of connected medical devices that are expected to be activated on 5G networks. For example, what happens when a remote heart monitor is compromised or real-time emergency services are disrupted by a DoS attack? And who underwrites that risk: the communications provider, the healthcare provider or the device manufacturer?

Energy providers


It goes without saying that energy services are mission-critical applications. One of the more interesting 5G applications is the use of connected devices to manage smart grids, power plants and municipal energy services such as water and electricity. But what happens if cybercriminals seize control of wireless water meters? Or if a regional smart grid is disrupted? As for safety sensors in nuclear power plants that might manage heating and cooling—well, let’s not even go there.

Those scenarios may sound unlikely, but attacks like these have already happened and been highly successful. The Mirai botnet is a classic example: it compromised a massive field of 4G IoT devices and nearly brought down the Internet. An interesting footnote: the malicious code was quickly shared on the Internet for other cybercriminals to use. Yes, cybercrime as a service is now a thing, and a lucrative one at that.

Connected vehicles


The concept of an Internet-connected vehicle may seem futuristic, but almost every modern vehicle is already a connected device. There are GPS connections, digital satellite radio connections, roadside service connections and collision radar connections. Then there are Bluetooth connections to our smartphones, which are themselves connected to a radio access network. And that’s before we even get into self-driving vehicles.

Beyond the safety risks of turning our cars into two-ton IoT devices, personal data is also at risk. We can log on to the Internet right now and track where our family members are located through their GPS devices. Going forward, in-car email and streaming video will be packaged with cars for a monthly fee, creating an even greater need for secure, encrypted communications. When vehicle-to-vehicle communications arrive, new security mechanisms will need to be put in place for that too.

Ultimately, service providers will need to extend their view of security to address not only subscribers but also the millions of connected devices that ride alongside their network in massive Machine Type Communications (mMTC) slices or support enterprise applications at the network’s edge. This will require the ability to weigh risk appetite against opportunity, anticipate the unknown and react to new threats in real time. In other words, 5G will be a very different world for service providers from a security perspective.

Tuesday, 25 August 2020

Dell EMC PowerEdge Brings Flexibility & Freedom To Your Workloads


Most IT professionals know too well the balancing act of wanting to implement cutting-edge, modern solutions while still maintaining control of the overall IT landscape. Your IT organization is committed to supporting the workloads that the business depends on for success – whatever that takes. But ideally, whatever new deployment your critical workloads demand won’t chip away at the control over your IT environment that your team has worked hard to establish.

Those of you who have read our past blogs on optimizing your organization’s infrastructure for critical workloads have heard us go through some of the benefits of modernizing your infrastructure with an eye towards end-to-end optimization. We’ve explored how workload placement strategy requires continued attention over time, and how a hybrid approach to IT will set your organization up with the flexibility needed as applications mature and new demands arise. But you may still find yourself asking how you can best leverage a variety of options in your IT purchasing – without losing control. Is there a way to engage in that freedom of choice, without sacrificing your command over your organization’s IT landscape?

One thing that all IT organizations have in common – no matter the size, the location or the industry – is this desire to avoid making a choice that pigeonholes your future decision making or creates a siloed IT experience. Whenever a new workload is prioritized, IT should not have to upend long-term strategies to support it. But when looking at your deployment options, choice with control may seem out of reach.

We hear from customers across the globe that they see Dell Technologies as a unique technology provider for their organizations because we can scratch this exact itch. They keep turning to our portfolio time and time again because our wide range of servers, data storage and data protection systems gives them the flexibility their infrastructure decisions require when needs change. Dell Technologies can provide you with the range of choice you desire, without disrupting the overall IT vision your organization has worked hard to support.

As applications mature and needs change, your team will be best prepared to react if they have a flexible and secure infrastructure behind them. Take our extensive line of Dell EMC PowerEdge servers as a prime example. The PowerEdge platform can help you establish the right hybrid infrastructure for your IT organization, providing you with the choice and the control that you need as you determine the best placement options for each of your critical business workloads over time.

Drive workload and analytics success with modern PowerEdge servers that allow your team to optimize the overall infrastructure by delivering the same capacity and performance on fewer servers, freeing resources to tackle application transformations. Our leading servers run your critical workloads with ease, so that you can keep the focus on supporting pressing asks from the business and helping to drive your overall organization forward. Configure to meet your needs and place workloads where they need to be.

Whatever workload is the priority for your organization, PowerEdge allows you to tailor the experience for your IT team. With one-click BIOS tuning, we enable quick and easy deployment via workload-optimized server configuration profiles. This allows you to deploy and run without delay, so that your IT staff can support requests from the business without interruption. From edge to core to cloud, PowerEdge can support even your most advanced workload needs with a familiar experience across them all. Learn how one company found the ideal IT infrastructure solution with the Dell EMC PowerEdge MX platform and OpenManage systems management.
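
To make the workload-profile idea concrete, here is a minimal sketch of staging a BIOS configuration change through the iDRAC Redfish interface. The attribute name "WorkloadProfile" and the value shown are illustrative assumptions; the attributes your server actually exposes are listed under its /redfish/v1/Systems/<id>/Bios resource, so treat this as a sketch rather than a definitive implementation.

import requests

IDRAC = "https://192.0.2.10"   # hypothetical iDRAC address
AUTH = ("root", "password")    # use a vaulted credential or session token in practice

def set_workload_profile(profile: str) -> None:
    """Stage a BIOS workload profile change; it is applied at the next reboot."""
    url = f"{IDRAC}/redfish/v1/Systems/System.Embedded.1/Bios/Settings"
    payload = {"Attributes": {"WorkloadProfile": profile}}
    resp = requests.patch(url, json=payload, auth=AUTH, verify=False)
    resp.raise_for_status()
    print(f"Staged BIOS workload profile '{profile}' (HTTP {resp.status_code})")

if __name__ == "__main__":
    # example value only; the Bios resource lists the options your model supports
    set_workload_profile("HpcOptimizedProfile")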


All of our advanced offerings – including the advanced Dell EMC PowerEdge platform – are available with flexible payment options through Dell Technologies On Demand. And Dell Technologies On Demand also includes value-added services with ProDeploy, ProSupport and Managed Services, which can be paired with all financial consumption models offered. By working with a technology provider who can offer you a seamless experience from end-to-end, you too can meet your organization’s most critical workloads where they need to be – on demand – and with all your public, private and edge clouds within reach.

And of course – any organization who partners with Dell Technologies benefits from even more points of seamless integration across the digital landscape thanks to our extended Dell Technologies family. For example, PowerEdge servers support the latest VMware stack, including VMware vSphere® 7 with Kubernetes®, VMware vSAN™ 7, and VMware Cloud Foundation™. You may have heard us say that our story began with two technology companies and one shared vision: to provide greater access to technology for people around the world. That mission continues forward proudly to this day.

Dell Technologies can provide you with the freedom of choice you deserve, without making you sacrifice the control your team needs. With the right technology in the right place, Dell Technologies and our powered-up portfolio will empower your IT organization with choice and control as you build, run and manage your hybrid cloud.



Source: dellemc.com

Saturday, 22 August 2020

Dell Technologies Helps Keep Businesses and Innovation Engines Running


Over the past few months, we’ve seen great resolve from our customers and partners as we all work together to overcome a new set of challenges every day. I’m in awe of our customers and partners around the world as they continue to innovate while adapting to the ever-changing landscape. It’s our honor to support your journey by delivering the technology needed for business-critical operations while helping you preserve capital and manage cash flow.

In April, we launched the Payment Flexibility Program (PFP) and announced $9 billion in financing to help fund your critical technology needs. Today, we’re announcing an extension of the Payment Flexibility Program through October 30, 2020, with payment deferrals until 2021.

With Dell Technologies On Demand, our customers gain access to a broad range of financial consumption models to keep their technology infrastructures running while continuing to innovate. The Payment Flexibility Program is unmatched in its value to organizations in every aspect. From payment deferrals to partner relief programs to low rate financing offers, we are here to help and best prepared to assist customers with flexible IT solutions. These programs give you access to industry-leading technology, resources to innovate and cash flow for business continuity.

Our partner Shane Michna from SHI Capital said, “All of the various promotional options have been highly leveraged by large and small clients alike. Many of our clients have needed the ability to acquire technology and defer payment to 2021. The six-month deferral and zero percent promotional offer on Dell Technologies storage and servers has been the most utilized and sought after by SHI sales clients.”

Introducing new updates to the Technology Rotation payment solution


We’ve seen that many of our customers want to acquire technology by paying over multiple years utilizing low cost financing. To meet your organization’s core infrastructure and remote workforce technology needs, we’re offering the lowest rate and total cost of ownership (TCO) ever for PowerStore storage arrays, PowerEdge servers, and Dell laptops and desktops as part of the Technology Rotation payment solution. At the end of the term, customers have the flexibility to return and upgrade their equipment with the latest technology to support hybrid cloud, on-premises workloads, or remote workforce initiatives.

This program helps customers better control IT costs by:

◉ Saving significantly compared to the full purchase price on laptops, desktops, storage and server solutions

◉ Conserving cash by deferring their first payment until 2021

◉ Extending payments on servers and storage using up to two fixed 12-month extensions for flexibility to return or acquire hardware over five years

◉ Expanding or enhancing PowerStore performance and capacity at any time in the contract through Anytime Upgrades

These new options are available in addition to the current features of the Payment Flexibility Program which include:

◉ Zero percent interest rates and deferred payments for Dell Technologies infrastructure solutions
◉ Short term options for remote work and learning with six- to 12-month terms and refresh options for laptops and desktops to help with back-to-school and return-to-work plans.
◉ A one-year term flexible consumption offering to better align payments to an end user’s technology usage
◉ Credit availability to our valued channel partners through our Working Capital Solutions Program, which extends payment terms up to 90 days; when combined with a Dell Financial Services** (DFS) payment solution, the partner is paid within a few days, improving their cash flow

Dell Technologies On Demand is helping customers prepare for tomorrow, today.  


Customers are finding immense value in creative technology financing solutions and flexible consumption models. IDC recently interviewed organizations using Dell Technologies On Demand and found significant cost savings when using on-demand consumption models. Customers cited the ability to acquire the newest technology at a much lower cost as one major advantage, stating, “We can essentially get the newest technology for a fraction of what we would pay for it outright.”

Dell Technologies On Demand helps enterprises align their IT spending with business demands and deliver more predictable outcomes. Customer demand for these flexible consumption-based and as-a-service models prompted us to expand our offerings to include Brazil, Chile, Colombia, India, and China.

Dell Technologies is here for the long haul!


The Payment Flexibility Program is the most comprehensive infrastructure financing program in the industry, made even stronger by Dell Technologies On Demand flexible consumption models. This is our commitment to help you run your business, take care of your people and access essential technology as you seek respite in the storm.

Source: dellemc.com

Thursday, 20 August 2020

Automate Productivity Using OpenManage Integrations for Microsoft

Data centers require consistent maintenance to ensure the BIOS, firmware, and drivers remain secure and run efficiently. Without consoles like OpenManage Enterprise, this maintenance requires IT professionals to spend significant time securing vulnerabilities while other issues emerge throughout the data center. Additionally, this becomes more complex as you create clusters and move towards hyper-converged infrastructure (HCI). You may also need console-specific features such as context awareness, cluster life cycle management, and day-N operations.

Dell EMC OpenManage Integration with Microsoft System Center (OMIMSSC) and OpenManage Integration with Microsoft Windows Admin Center (OMIMSWAC) are launching new features to reduce the time spent updating your PowerEdge servers for Azure Stack HCI. Microsoft Windows Admin Center is the one-to-one console (like iDRAC) and System Center is the one-to-many console (like OpenManage Enterprise) for PowerEdge servers and Dell EMC Solutions for Azure Stack HCI. With these integrations, you will be able to automate updates across servers and clusters in a hybrid cloud environment.

No unnecessary interruptions


Many of our customers are already familiar with the features in OpenManage Enterprise that allow you to set routines to update the BIOS and firmware of your systems. Now, OpenManage integrations with Windows Admin Center and Microsoft System Center simplify and unify these features. The Azure Stack HCI integration with Microsoft System Center coordinates Cluster-Aware Updates (CAU) to the BIOS and firmware without impacting workloads in your datacenter. Windows Admin Center coordinates BIOS, firmware, and driver updates for Azure Stack HCI clusters and Hyper-V clusters.

With failover support in Hyper-V clusters, these integrations direct the migration of your workloads to ensure impacted applications never go offline. They do this by autonomously moving workloads to online servers while putting the servers being updated into maintenance mode. This eliminates the complexity of scheduling downtime when coordinating updates and patch management. With these tools, you will be able to enhance and simplify life cycle management without additional agents or software.
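
Conceptually, the cluster-aware flow described above looks like the schematic sketch below: drain one node, update it, return it to service, then move to the next. The helper functions are hypothetical placeholders used only to illustrate the sequence, not the actual OpenManage or Windows Admin Center APIs.

from typing import List

def drain_node(node: str) -> None:
    # live-migrate workloads off the node and place it in maintenance mode
    print(f"draining {node}")

def apply_updates(node: str) -> None:
    # apply the BIOS/firmware/driver baseline while the node carries no workloads
    print(f"updating {node}")

def resume_node(node: str) -> None:
    # leave maintenance mode; workloads can fail back or rebalance onto the node
    print(f"resuming {node}")

def cluster_aware_update(nodes: List[str]) -> None:
    for node in nodes:           # one node at a time, so the cluster stays online
        drain_node(node)
        apply_updates(node)
        resume_node(node)

cluster_aware_update(["hci-node-01", "hci-node-02", "hci-node-03"])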

Customize to your HCI fingerprint


As your data center matures, its systems, applications, and workloads diverge into complex and customized environments. Effective management becomes essential to operating a large data center. We made sure OpenManage integrations with Azure Stack HCI allow you to operate across the hybrid cloud.

First, the solution catalog from Azure Stack HCI is fully supported by Dell Technologies. You can select which updates and releases are right for your environment. Second, OMIMSWAC and OMIMSSC support both disconnected and connected edge scenarios with Dell EMC online catalogs. You will never be stranded without catalog options because of the onboard Dell EMC Repository Manager and the Dell online catalogs. The differences between base and premium versions and whether you are running a standard PowerEdge or the Dell EMC Azure Stack HCI are highlighted below.


How do I get it?


We tried to make it as easy as possible to get the right features and integrations for your data center environment. These OpenManage integrations with Microsoft are available at the point of sale or any time thereafter. If you are looking to try the ecosystem or already use it, we made the base version for Windows Admin Center free and enabled a trial for the System Center console. After that, it is always easy to upgrade to the premium versions, which cost a standard licensing fee.

Tuesday, 18 August 2020

Taboola Makes AI Real


Personalized recommendations have changed the way brands reach their customers effectively. Taboola is the world’s largest discovery platform, delivering content recommendations to billions of consumers on many of the world’s top sites. We recently sat down with Ariel Pisetzky, Vice President of IT and Cyber, to learn how Taboola uses AI to successfully drive their business. Taboola provides the right recommendation 30 billion times daily across four billion web pages, processing up to 150,000 requests per second.

A few years ago, Mr. Pisetzky and his team required a modernized infrastructure to support Taboola’s growth and improve the experience of their customers and advertisers.

Delivering Taboola’s services requires extraordinary computing power and simplified management to attain the maximum performance to serve clients and users worldwide. The company turned to AI, because it would allow them to dynamically respond to inquiries using inferencing and deep learning capabilities. Success depended on being able to keep insights flowing with adaptable AI systems, innovative architecture and intuitive systems management.

The engine driving their AI solution consists of two components: front-end artificial intelligence (AI) inferencing, based on PowerEdge modular servers with Intel® Xeon® Scalable processors, which processes requests and delivers real-time content recommendations; and back-end servers that host cutting-edge deep learning models, which are continually trained using sophisticated neural networks to infer user preferences.

By using PowerEdge modular servers, the IT team at Taboola can meet rapidly changing demands and enjoy the versatility and simplicity necessary to support a building block approach. The team is able to cost-effectively use the same servers interchangeably as AI inferencing nodes, database servers or storage nodes with very simple configuration changes. Each request coming into a front-end data center runs the AI-driven inferencing algorithms in a unique, ultra-fast process that delivers a relevant recommendation within 50 milliseconds.
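
For context, a per-request budget like the 50 milliseconds cited above is typically verified with a simple latency harness along the lines of the sketch below. The recommend function is a stand-in rather than Taboola's actual code; the point is simply checking tail latency against a budget.

import statistics
import time

BUDGET_MS = 50.0

def recommend(request_id: int) -> str:
    # placeholder for the real AI-driven inferencing call
    return f"recommendation-{request_id}"

def measure(n_requests: int = 1000) -> None:
    latencies_ms = []
    for i in range(n_requests):
        start = time.perf_counter()
        recommend(i)
        latencies_ms.append((time.perf_counter() - start) * 1000.0)
    p99 = statistics.quantiles(latencies_ms, n=100)[98]   # 99th percentile latency
    verdict = "within budget" if p99 <= BUDGET_MS else "over budget"
    print(f"p99 latency {p99:.3f} ms against a {BUDGET_MS:.0f} ms budget: {verdict}")

measure()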

Taboola took full advantage of the built-in performance acceleration of 2nd Gen Intel Xeon Scalable processors—together with the highly optimized Intel Math Kernel Library for Deep Neural Networks (Intel MKL-DNN). Taboola was able to initially enhance its performance by a factor of 2.5x or more with their modernized infrastructure. Then, by gaining efficiencies within the software layer—including the operating system, TCP/IP stack, load balancing, Kubernetes and more—Mr. Pisetzky’s team went much further.

“With PowerEdge servers and Intel Xeon Scalable processors, we now get up to six times the performance on our AI-based inferencing compared to when we started,” states Pisetzky. “This helps reduce our costs, and we believe there’s a lot more to be gained over time.”

For the back-end data centers running deep learning-based models to accurately and reliably train the Taboola models, the Dell EMC PowerEdge R740xd servers with their lightning-fast accelerators were the answer.

“Training is much different from the real-time inferencing we do on the front end. The demands aren’t in terms of response times, but rather the time it takes to process large volumes of data. PowerEdge R740xd servers provide the performance to access our massive data to train our models and push them back to our front-end data center for inferencing. We’re using Vertica, Cassandra and MySQL databases across a variety of nodes,” states Mr. Pisetzky.

Today, the company takes a more holistic view of its data centers as high-performance computing (HPC) clusters, which are able to process an enormous number of requests per second. Rather than just add servers or racks, Taboola looks at everything as a single HPC machine, and reshuffles servers to achieve significant performance improvements and greater cost efficiencies.

The next step in building Taboola’s solution was determining the most efficient and cost effective way to manage this large global footprint with a small IT team of 12 Site Reliability Engineers across nine global data centers. The team turned to iDRAC, which allows them to deploy servers with the touch of a button. They can easily update servers across their data centers and ensure the BIOS and firmware settings are identical across all servers.
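
As a rough illustration of the consistency checks such a small team might script, the sketch below reads the standard Redfish firmware inventory from several iDRACs and flags drift from a baseline server. The addresses and credentials are hypothetical, and the inventory payload can vary by server model, so treat this as a sketch under those assumptions.

import requests

AUTH = ("root", "password")   # hypothetical credentials

def firmware_inventory(idrac: str) -> dict:
    """Return {component name: version} from the standard Redfish inventory."""
    base = f"https://{idrac}/redfish/v1/UpdateService/FirmwareInventory"
    members = requests.get(base, auth=AUTH, verify=False).json().get("Members", [])
    inventory = {}
    for member in members:
        item = requests.get(f"https://{idrac}{member['@odata.id']}",
                            auth=AUTH, verify=False).json()
        inventory[item.get("Name", "unknown")] = item.get("Version", "unknown")
    return inventory

servers = ["192.0.2.11", "192.0.2.12", "192.0.2.13"]   # hypothetical iDRAC addresses
baseline = firmware_inventory(servers[0])
for server in servers[1:]:
    for name, version in firmware_inventory(server).items():
        if name in baseline and baseline[name] != version:
            print(f"{server}: {name} is {version}, baseline is {baseline[name]}")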

The results Taboola has delivered to their users are amazing. Today, different people can go to the same page and receive personalized recommendations relevant to them, all without Taboola knowing who they are. AI has provided Taboola with the ability to take their business to the next level with impressive results. They can now provide personalized services, better user experiences and better results for their end users, advertisers and publishers.

Saturday, 15 August 2020

Reducing Wait Times With a New I/O Bottleneck Buster


The Data Accelerator from Dell Technologies breaks through I/O bottlenecks that impede the performance of HPC workloads


In high performance computing, big advances in system architectures are seldom made by a single company working in isolation. To raise the system performance bar to a higher level, it typically takes a collaborative effort among technology companies, system builders and system users. And that’s what it took to develop the Data Accelerator (DAC) from Dell Technologies.

This unique solution to a long-running I/O challenge was developed in a collaborative effort that drew on the expertise of HPC specialists from Dell Technologies, Intel, the University of Cambridge and StackHPC. The resulting solution, DAC, enables the next generation of data‑intensive workflows in HPC systems with an NVMe‑based storage solution that removes storage bottlenecks that slow system performance.

How so? DAC is designed to make optimal use of modern server NVMe fabric technologies to mitigate I/O‑related performance issues. To accelerate system performance, DAC proactively copies data from a cluster’s disk storage subsystem and pre-stages it on fast NVMe storage devices that can feed data to the application at a rate required for top performance. Even better, this unique architecture allows HPC administrators to leave data on cost-effective disk storage until it is required by an application, at which point the data is cached on the DAC nodes.
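
The caching idea is easy to picture with a small sketch: copy a dataset from the bulk disk tier to a fast NVMe scratch area before the job starts, point the job at the cached copy, and release the cache afterwards. This is a concept illustration only, not the DAC orchestrator, and the mount points shown are hypothetical.

import shutil
import time
from pathlib import Path

def stage_in(bulk: Path, scratch: Path) -> Path:
    """Copy a dataset from the bulk disk tier to the fast NVMe scratch tier."""
    start = time.perf_counter()
    if scratch.exists():
        shutil.rmtree(scratch)
    shutil.copytree(bulk, scratch)
    print(f"Staged {bulk} -> {scratch} in {time.perf_counter() - start:.1f} s")
    return scratch

def stage_out(scratch: Path) -> None:
    shutil.rmtree(scratch, ignore_errors=True)   # release the cache when the job ends

# hypothetical mount points; the job then reads from the returned scratch path
data_dir = stage_in(Path("/lustre/project/dataset"), Path("/nvme/scratch/dataset"))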

Plunge Frozen


Cryogenic Electron Microscopy (Cryo-EM) with Relion is one of the key applications for analyzing and processing these large data sets. Greater resolution brings challenges, as the volume of data ingested from such instruments increases dramatically and the compute requirements for processing and analyzing this data explode.

The Relion refinement pipeline is an iterative process that performs multiple iterations over the same data to find the best structure. As the total volume of data can be tens of terabytes in size, this is beyond the memory capacity of almost all current-generation computers and thus, the data must be repeatedly read from the file system. The bottleneck in application performance moves to the I/O.

A recent, challenging test case produced by Cambridge research staff is 20TB in size. Moving this test case from the Cumulus supercomputer’s traditional Lustre file system to the new NVMe-based DAC reduces I/O wait times from over an hour to just a couple of minutes.
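
A quick back-of-the-envelope calculation shows why a 20TB working set is so sensitive to storage bandwidth. The throughput figures below are illustrative assumptions, not measured Cumulus numbers, but they reproduce the order of magnitude of the improvement described above.

# time to read a 20 TB working set once per refinement iteration
DATASET_TB = 20
BYTES = DATASET_TB * 1e12

# assumed aggregate read throughput per tier, purely for illustration
for label, gb_per_s in [("shared disk file system", 5), ("NVMe cache tier", 200)]:
    seconds = BYTES / (gb_per_s * 1e9)
    print(f"{label:>25}: {gb_per_s:>4} GB/s -> {seconds / 60:.1f} minutes per pass")

# roughly 67 minutes versus under 2 minutes per pass, matching the
# "over an hour to a couple of minutes" improvement described above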

The Data Accelerator in the Cumulus supercomputer incorporates components from Dell Technologies, Intel and Cambridge University, along with an innovative orchestrator built by the University of Cambridge and StackHPC.

With its innovative features, DAC delivers one of the world’s fastest open‑source NVMe storage solutions. In fact, with the initial implementation of DAC, the Cumulus supercomputer at the University of Cambridge reached No. 1 in the June 2019 I/O-500 list. That means it debuted as the world’s fastest HPC storage system, nearly doubling the performance of the second‑place entry.

And here’s where this story gets even better. Today, Dell Technologies is sharing the goodness of DAC by making the solution available to the broad community of HPC users via an engineering-validated system configuration covering DAC server nodes, memory, networking, PCIe storage and NVMe storage.

Friday, 14 August 2020

Is There Any Scope of Doing Dell EMC Advanced Analytics Specialist Certification?

Dell EMC Advanced Analytics Specialist Certification Overview

The Dell EMC Advanced Analytics Specialist is a certification that exposes candidates to big data and data analytics. Topics for this certification cover an introduction to data analytics, the characteristics of big data, and the data scientist's role. It also covers a variety of big data theories and techniques, including linear regression, time-series analysis, and decision trees.

Various tracks exist within the Proven Professional program, allowing certificate holders to achieve a depth of knowledge in particular areas of Dell EMC products and storage technologies. Proven Professional tracks include data scientist, cloud architect, cloud and storage administrator, technology architect, engineer, and application developer. Dell EMC also offers product-specific certifications for customers who employ ScaleIO, ViPR, Data Protection Advisor, and VxRail storage solutions.

Most, but not all, Proven Professional tracks come in three levels: Associate, Specialist, and Expert. Each certification track in the Proven Professional program targets a combination of audiences: employees, partners, customers, and the industry at large.

The next level up on the certification ladder is the Dell EMC Data Scientist, Advanced Analytics Specialist. Professionals at this level use advanced analytics to solve business issues. They use Hadoop, Pig, Hive, and HBase; analyze social networks, and understand natural language processing.

Benefits of Dell EMC Advanced Analytics Specialist Certification

Data exploration and data discovery recognize and quantify the characteristics of the data. This is an interactive method for learning which variables and metrics to test in the iterative analytic model development process.

Data enrichment is the process of creating new, higher-order variables that improve the raw data's content and meaning, given the problem being addressed. Data enrichment techniques include log transformations; recency, frequency, and monetary (RFM) calculations; indices; shares; attributions; and scores.

Data visualization uses tools and methods to recognize patterns, trends, outliers, and correlations that might be important in the analytic modeling process, and to identify variables and metrics that might be better predictors of business and operational performance.

Feature engineering, the creation of new input features for machine learning, is one of the most effective ways to improve predictive models, as the short example below illustrates.
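
As a concrete, if simplified, illustration of the feature-engineering idea, the following sketch applies one of the enrichment techniques mentioned earlier (a log transformation) and feeds the result to a decision tree. The data is synthetic and the scenario hypothetical; it is only meant to show the mechanics, not a recommended model.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
spend = rng.lognormal(mean=3.0, sigma=1.0, size=2000)      # raw, heavily skewed variable
churned = (spend < np.median(spend)).astype(int)           # toy target for illustration

X = np.column_stack([spend, np.log1p(spend)])              # raw value plus engineered log feature
X_train, X_test, y_train, y_test = train_test_split(X, churned, random_state=0)

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")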

The Career Prospects After Passing Dell EMC Advanced Analytics Specialist Exam

Today, almost all companies use IT deployment services. They need IT professionals, data scientists, specialists, implementation engineers, and technology architects. HR managers and enterprises give preference to certified implementation engineers, scientists, and specialists.

After passing this Dell EMC Advanced Analytics Specialist (E20-065) exam, you will be in a better position to get a job. You can increase your credibility and confidence. To distinguish yourself in the market, you have to get this credential with good marks. You can start and advance your career in the following job roles.

Data Scientist Requirements

Each industry has its big data profile for a data scientist to investigate. Here are some of the more popular forms of big data in each sector, and the sets of analysis a data scientist will likely be required to perform.
  • Business: Today, data shapes the business strategy for nearly every company, but companies need data scientists to make sense of the information. Analysis of business data can inform decisions around performance, inventory, production errors, customer loyalty, and more.
  • E-commerce: Now that websites collect more than purchase data, data scientists help e-commerce businesses improve customer service, find trends, and promote services or products.
  • Finance: In the finance industry, data on accounts, credit and debit transactions, and related financial records are essential to a functioning business. But for data scientists in this field, security and compliance, including fraud detection, are also significant concerns.
  • Government: Big data helps governments make decisions, support constituents, and monitor overall satisfaction. As in the finance sector, security and compliance are paramount concerns for data scientists.
  • Science: Scientists have always managed data, but with today's technology they can better collect, share, and analyze data from experiments. Data scientists can help with this process.
  • Social Networking: Social networking data helps inform targeted advertising, improve customer satisfaction, establish trends in location data, and improve features and services. Ongoing analysis of posts, tweets, blogs, and other social media can help businesses improve their services continuously.
  • Healthcare: Electronic medical records are now the standard for healthcare facilities, which demand a commitment to big data, security, and compliance. Here, data scientists can help improve health services and uncover trends that might otherwise go unseen.
  • Telecommunications: Telecom providers collect enormous amounts of data, and all of it needs to be stored, managed, maintained, and analyzed. Data scientists help these companies squash bugs, improve products, and keep customers happy by delivering the features they want.
  • Other: No industry is immune to the big data push, and you will find jobs in niche areas such as politics, utilities, smart appliances, and more.

Key Takeaways

From my perspective, the Dell EMC Advanced Analytics Specialist is useful for someone who has knowledge about data science or has trained a couple of predictive analytics models.

Through the study, we learn the process of training a model, building a pipeline for the experiment, and deploying the model to production in an efficient way. In my opinion, it is not only suitable preparation for the Dell EMC Advanced Analytics Specialist but also gives you an idea of real-world processes, even on other platforms.

It is no secret that data scientists can bring an immense amount of value to the table. Finding one person who can do all the tasks expected of a data scientist is challenging, and competition to hire these professionals is fierce.

Thursday, 13 August 2020

When “Good Enough” Isn’t Good Enough

As we’ve stated in previous blogs, cyber recovery is arguably the most critical capability any IT decision-maker must evaluate when looking to modernize and transform their data protection to address today’s threats. It has become table stakes for data protection vendors to offer some cyber recovery features within their products, but not all cyber recovery protection is created equally.

Let’s be crystal clear about this. Settling for “Good Enough” is not an acceptable approach when it comes to protecting your company’s most critical data from a cyberattack. Would you settle for “Good Enough” safety in your automobile? Would you settle for “Good Enough” homeowner’s insurance to protect your house or family against a personal loss? If the answer is no, then when it comes to the data and applications that keep your business running and alive, why is “Good Enough” acceptable?

Some vendors claim that certain features or strategies are good enough in the face of a cyberattack, but they won’t be the ones left to answer the hard questions after an attack that leaves you unable to recover critical data. That will be you, your CEO, CFO, and CISO. That’s why if I were evaluating cyber recovery solutions, I would focus on the three cyber recovery pillars below to see which vendors truly help protect my company’s most critical asset.

Retention Lock in Production


Retention lock prevents specified files from being overwritten, modified, or deleted for a user-defined retention period and is an excellent first step for companies looking to improve their cyber resilience. Most vendors, including Dell Technologies, offer this hardening feature, but we take the protection even further. The Dell EMC PowerProtect DD retention lock feature, which has been attested to comply with the SEC 17a-4(f) standard, comes in two flavors: Governance and Compliance mode. With Governance mode, data is retained for a specific time period but can still be overridden or modified by an administrator with account credentials; this is valuable in certain use cases such as legal hold. Compliance mode, on the other hand, is stricter: not even an administrator with (advanced) credentials can edit or delete data during the retention period. PowerProtect DD also includes Compliance mode, for data protected within the cyber recovery vault, as a standard feature; there is no extra cost or performance penalty for being better protected!
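
For readers who want to see what a retention request can look like from the application side, here is a minimal sketch based on the common convention of setting a file's last-access time (atime) to the desired expiration date on an NFS- or CIFS-exported share. The path, the demo file, and the exact locking semantics are assumptions for illustration; consult the PowerProtect DD documentation for the behavior of your system and mode.

import os
import tempfile
import time

def lock_until(path: str, days: int) -> None:
    """Request retention on a file by pushing its atime to the expiration date."""
    expire = time.time() + days * 24 * 3600
    mtime = os.stat(path).st_mtime
    os.utime(path, times=(expire, mtime))   # keep mtime, move atime into the future
    print(f"{path}: retention requested until {time.ctime(expire)}")

# demo against a throwaway local file; in practice `path` would live on the protected share
demo = tempfile.NamedTemporaryFile(delete=False)
demo.close()
lock_until(demo.name, days=90)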

At Dell Technologies, we’re advocates of Retention Lock, and that’s why we offer two modes. It’s a helpful first step in data hardening, but it’s still only one step of your cyber recovery strategy.

“Off-network” Air Gap Isolation


Cybercriminals’ techniques are continually evolving and becoming more advanced. In most cases, they will penetrate networks long before they launch their attack. Once inside the corporate network, they ensure that when they do strike you won’t be able to recover. They do this by disabling backups, changing NTP clocks, encrypting CIFS and NFS backup shares, and so on. This is why it is so necessary to have an off-network air gapped copy of your mission-critical data, ensuring you have a protected copy available in the event of an attack.

If you search for air gap solutions online, you will see most vendors in the market claim to offer some sort of a solution, but the devil is in the details. Everyone has a different definition of an air gap, including simply sending data offsite with tape. While it is correct that sending a tape off site provides an air gap copy, it comes with multiple tradeoffs. Minutes count in the event of a ransomware attack, and the time spent retrieving tapes from an offsite facility and then restoring your entire backup environment from tape will be costly. Another risk is that the backup catalog and tape media catalog may be compromised as part of the attack, rendering the offline tapes useless for recovery or needing to be re-indexed, which adds significant recovery time.

Moreover, depending on how old the tape is you need to recover, will you even be able to restore it? We all know tape degrades over time. Why would you want to risk putting your company’s most critical data on media that you know is susceptible to failure?

Recently, some vendors have even been positioning the idea of sending immutable copies, or data that is unable to be changed, to a public cloud as an air gap cyber recovery solution. The data sent to the cloud might be immutable, but your cloud account certainly isn’t. All it takes is an administrator with the right credentials (which a cyberattacker is likely to have since they already compromised the network) to delete your cloud account, not necessarily the files or content contained within that account, and that air gap copy in the cloud is gone.

Dell EMC PowerProtect Cyber Recovery, on the other hand, provides an automated off-network air gap for complete network isolation. PowerProtect Cyber Recovery moves critical data away from the attack surface of both the production and backup environments, physically isolating it within a protected part of the data center (or offsite) and requiring separate security credentials for access. This isolated environment, separated by the air gap, is what we call the PowerProtect Cyber Recovery vault, which is the centerpiece of our solution. The PowerProtect Cyber Recovery vault provides multiple layers of protection for resilience against cyberattacks, even from an insider threat. PowerProtect Cyber Recovery automates the synchronization of data between the primary backup system and the vault, creating immutable copies with locked retention policies. If a cyberattack occurs, you can quickly identify a clean copy of your data and recover your critical systems to get your business back up and running. We can also support third-party software paired with PowerProtect DD, which gives customers and partners flexibility and choice.


PowerProtect Cyber Recovery Analytics


While many vendors in the market provide integrated analytics within their data protection solutions, it’s important to understand what those analytics offer. As I have previously stated, most vendors only take a high-level view of the data and use analytics that look for obvious signs of corruption based on metadata. Metadata-level corruption is not difficult to detect, and if a solution leverages only this kind of analytics, it will miss changes within the file itself that often indicate a compromise. Some vendors also use a multi-pass approach that runs on-prem metadata analytics on the first pass, then sends suspicious data to the cloud for a second pass of full-content analytics. This approach still has multiple challenges, including delayed discovery of potential threats, and it forces the customer to send business-critical data offsite to a cloud provider, which is inherently less secure than performing these operations within the security of an on-premises vault environment.

PowerProtect Cyber Recovery not only provides full-content analytics but also runs them inside the vault, where an attacker cannot compromise them. Running analytics on the data in the vault is a critical component of enabling quick recovery of “known good data” after an attack. Our analytics are particularly powerful because they can read through the backup format; there is no need to restore data, and PowerProtect Cyber Recovery can evaluate the full contents of the file, not just its metadata. To truly understand how powerful our analytics are, it’s essential to know how they work.

Data is first scanned in the format in which it was stored in the vault; typically this is a backup file format. Analytics then conduct over 100 observations per file. These observations are collected and evaluated by a machine learning tool that has been trained to identify patterns indicating data has been corrupted. Since we are looking for patterns and not signatures, the analysis is more effective and does not need to be updated as frequently. This process is repeated each time a new data set is brought into the vault. Data can be compared daily to provide a complete picture of changes that might be occurring very slowly and that other tools would likely miss.
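
To make the idea of full-content observations more concrete, the sketch below computes two simple per-file signals (byte entropy and compressibility) of the kind a content-analytics pipeline might feed to a trained model. It is purely illustrative: the features, the threshold rule, and the random sample standing in for a vaulted backup block are assumptions, not the actual PowerProtect Cyber Recovery analytics.

import math
import os
import zlib
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; encrypted data trends toward 8.0."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def observations(data: bytes) -> dict:
    return {
        "entropy_bits_per_byte": byte_entropy(data),
        "compress_ratio": len(zlib.compress(data)) / max(len(data), 1),
    }

def looks_suspicious(obs: dict) -> bool:
    # stand-in for a model trained on patterns of corruption across many observations
    return obs["entropy_bits_per_byte"] > 7.5 and obs["compress_ratio"] > 0.95

sample = os.urandom(1 << 20)    # stand-in for a block read from a vaulted backup copy
obs = observations(sample)
print(obs, "-> suspicious" if looks_suspicious(obs) else "-> clean")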

In our opinion, along with a growing list of happily protected customers, PowerProtect Cyber Recovery provides the “best” protection against cyber attacks vs. many vendors who offer their “Good Enough” solutions.

Understanding these three cyber recovery pillars will help you make an informed cyber recovery decision that meets your company’s needs when you’re comparing solutions from different vendors. “Good Enough” might be acceptable when it comes to shopping at the grocery store, but not when it comes to deciding on cyber protection that impacts your company’s most valuable asset – data.

Wednesday, 12 August 2020

Modern VDI for the Hybrid Cloud with Dell EMC and VMware Horizon 8


The way we work is rapidly changing, and organizations around the world have responded quickly to the challenge. In order to move fast to meet end-user access requirements and demands, organizations have taken advantage of desktop and application virtualization, deploying on-premises and in the cloud. Businesses want to take full advantage of a hybrid cloud Virtual Desktop Infrastructure (VDI) solution for the rich features and manageability of on-premises deployment and the immediate availability and scale of the public cloud. However, a hybrid cloud VDI architecture needs careful consideration, such as where desktops, apps, and data reside relative to end users, how they are secured, and whether the current on-premises environment supports easy migration of VDI workloads to public clouds. In addition, deploying and managing desktops and apps across multiple clouds and locations can create management overhead.

What if you could deploy desktops and apps on both private and public cloud, to provide the best experience for your end users at the best cost profile, with one management system? Today, we are pleased to announce the availability of Horizon 8 with the Dell Technologies Ready Solutions for VDI.

Top 3 Challenges of Hybrid Cloud VDI Deployment:


1. Inconsistent deployment of VDI – Organizations with an on-premises VDI solution need to quickly scale capacity to the public cloud when needed. The reverse is also true: organizations with a cloud VDI deployment may need to move some workloads back on-premises. This requires different SLAs, different images, different cost structure, different location of virtual desktops and apps and potentially a different user experience. Scaling and moving VDI environments across different deployment models can be slow and difficult to execute.

2. Inconsistent end-user experience – With VDI environments being deployed across clouds, employees often experience inconsistent load time, high latency, and poor application performance with their virtual desktops. This drastically reduces employee productivity and job satisfaction, negatively impacting business outcomes.

3. Complex Management – Deploying desktops and apps on-premises and in the cloud often comes with multiple management platforms. It can be hard to gain real-time visibility to resolve issues quickly, security exposure can increase significantly, and each management system requires its own expertise.

A Simple Solution – VDI Made for Hybrid Cloud:


To help organizations take full advantage of a hybrid cloud VDI solution while avoiding any downsides, VMware launched Horizon 8 with new and improved features that help quickly and flexibly scale your VDI environment across clouds. Together, Dell EMC and VMware create a one-stop, end-to-end solution that’s a simple path to a hybrid cloud VDI strategy.

◉ Scale to any cloud with flexible VDI deployment models – Customers can freely choose where to deploy their VDI environments with Dell EMC and VMware solutions. For customers who want to refresh their on-premises VDI environment or bring some workloads back to on-premises from the cloud, we encourage them to rethink their data center structure and consider VMware Horizon on Dell Technologies Cloud Platform (DTCP). VMware Horizon on DTCP is built on Dell EMC VxRail hyper-converged infrastructure (HCI) with VMware Cloud Foundation (VCF), delivering simple operations through automation across the VDI and app stack. Customers can deploy and configure virtual desktops at lightning speed through native integrations across Dell VxRail, VMware SDDC, and Horizon virtual desktops and apps, with built-in security from the hardware layer to the end-user virtual desktops.

For customers who plan to scale their VDI environment from the on-premises data center to the cloud, Horizon on DTCP provides an easy and fast way to extend capacity to public clouds. Horizon 8 expands support to additional cloud platforms running the native VMware stack, including Horizon on VMware Cloud on Dell EMC. These extensive cloud deployment options truly enable a hybrid model, so IT can take advantage of cloud resources where they need to.

◉ Enable a delightful end-user experience – A new universal brokering solution available with Horizon 8 helps federate brokering across multiple clouds. This cloud brokering service automatically connects end-users to their personal desktop in any pod on-premises or in the public cloud, globally, resulting in low latency and high-quality connection regardless of the user location.

Instant Clone Smart Provisioning technology in Horizon 8 can rapidly provision full-featured, personalized virtual desktops and apps, instantly delivering a persistent desktop that satisfies the user’s specific needs. There are also improvements that optimize audio and video support for Microsoft Teams, Zoom, and other communication and collaboration tools, providing a better user experience to improve productivity.

◉ Manage silos with a single console – With Horizon 8, IT can take advantage of a single control plane to efficiently deploy, manage, and scale desktops and apps across private and public clouds and across Horizon pods. For example, IT can use a common set of tools and a unified architecture to seamlessly migrate on-premises VDI workloads to VMware environments in the public clouds. This allows IT to provision additional capacity, such as launching hundreds of virtual desktops in a short time, or to enable disaster recovery (DR) with an active primary site on-premises and a passive secondary site in the public cloud.

New RESTful APIs help automate rich capabilities available in Horizon, including monitoring, entitlements, and user and machine management, as illustrated in the sketch below. IT can easily interact with Horizon for added flexibility, distributing and accessing information, and modernizing services with speed.
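
As a rough illustration of scripting against these REST APIs, the sketch below authenticates to a connection server and pulls basic health information. The endpoint paths, payload fields, and server address are assumptions based on typical Horizon REST routes; check the API explorer on your connection server for the exact routes and schemas your version exposes.

import requests

CS = "https://horizon-cs.example.com"   # hypothetical connection server

def get_token(domain: str, user: str, password: str) -> str:
    """Log in and return a bearer token for subsequent API calls."""
    resp = requests.post(f"{CS}/rest/login",
                         json={"domain": domain, "username": user, "password": password},
                         verify=False)
    resp.raise_for_status()
    return resp.json()["access_token"]

def connection_server_health(token: str) -> list:
    resp = requests.get(f"{CS}/rest/monitor/connection-servers",
                        headers={"Authorization": f"Bearer {token}"}, verify=False)
    resp.raise_for_status()
    return resp.json()

token = get_token("corp", "svc-monitor", "********")
for server in connection_server_health(token):
    print(server.get("name"), server.get("status"))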

Dell EMC and VMware provide a turnkey, end-to-end VDI solution that is easy to buy and quick to deploy to any cloud.

Source: dellemc.com

Tuesday, 11 August 2020

Simplify Hybrid Cloud with Dell EMC PowerEdge + VMware

With the explosive growth of data fueling today’s global environment, businesses’ ability to readily adapt is becoming more and more critical for survival, and IT managers are under enormous pressure to deliver applications and services that innovate and transform the business.

Regardless of  the size of an organization, cloud computing is a fundamental business enabler that powers everything from email and collaboration tools to mission-critical systems. As cloud technology, and the way businesses use it, has evolved, hybrid cloud models have become a dominant way to balance cloud-dependent functionality with considerations that divide workloads between on-prem computing, enterprise data centers, and public and private clouds.

Hybrid clouds have become the norm for many organizations globally, with 92% of organizations having both public and private cloud environments installed, and 90% of modern server environment operators reporting  increased value and effectiveness from hybrid cloud initiatives versus just 62% of those running legacy server shops. Similar to the real-world nature of clouds, which come in various types with different environments, hybrid clouds are also dynamic. Sometimes they must be built quickly and flexibly, sometimes they hang around for a long time, and other times they dissipate quickly. Often the data doesn’t just stay on one of these many clouds, and only 50% of all applications are expected to “stay in place” over the next year.

Recently VMware introduced VMware Cloud Foundation 4. VMware Cloud Foundation (VCF) is the future-proof hybrid cloud platform for modernizing data centers and deploying modern apps.  VCF is based on VMware’s proven technologies including VMware vSphere with Tanzu (Kubernetes), vSAN, NSX, and vRealize Suite, providing a complete set of software-defined services for compute, storage, networking, security, and cloud management to run enterprise apps – traditional or containerized – across hybrid clouds.


Cloud Foundation delivers enterprise agility, reliability, and efficiency for customers seeking to deploy private and hybrid clouds. Cloud Foundation simplifies the hybrid cloud by delivering a single integrated solution that is easy to deploy and operate through built-in automated lifecycle management.

Cloud Foundation 4 provides consistent infrastructure and operations from the data center to the cloud and the edge, making VCF an ideal platform for hybrid cloud deployments. It serves as an integrated software platform that automates a complete software-defined data center (SDDC) on a standardized architecture, such as Dell EMC PowerEdge and Dell EMC vSAN Ready Nodes, which are preconfigured, tested, and jointly certified to run vSAN and take the guesswork out of deploying your environment.

With Cloud Foundation 4, VMware has enhanced the various software stack components to boost manageability, security and development of modern apps. Such new features in the stack include: enhanced support for modern apps with vSphere with Tanzu, vSphere Lifecycle Manager (vLCM), upgrades in cloud-scale networking and security with NSX, and updated vRealize Suite.

The Kubernetes addition, now intrinsic to vSphere 7 with Tanzu, enables VCF Services and application‑focused management for streamlined development, agile operations, and accelerated innovation. With native support for Kubernetes, customers can build, run and manage containers and virtual machines in one environment.

vSphere Lifecycle Manager, or vLCM, reduces operational complexity for day-zero through day-two tasks and cuts the time needed to monitor the environment. Updating an 8-node cluster with vLCM and OMIVV requires up to 98% less hands-on time and 98% fewer steps.¹ Before the introduction of vSphere 7 and vLCM, admins often used up to nine tools to manage their VMware environment.

Similarly, the VMware vRealize Suite, a cloud management solution for multi-cloud environments, provides a modern platform for infrastructure automation and consistent operations. vRealize is included with most editions of VCF and ultimately helps with application operations, allowing developers to quickly release, troubleshoot and optimize the performance of highly distributed, microservice-based cloud applications.

NSX 3 enables cloud-scale networking and security by eliminating infrastructure boundaries across clouds. It delivers true single-pane-of-glass management along with enhanced constructs for data-path multi-tenancy and service chaining, and its federation capabilities give network operators a cloud-like operating model. In addition, NSX distributed IDS/IPS, network intelligence enhancements, security anomaly detection, L7 Edge Firewall enhancements and application templates for micro-segmentation come together to build additional intrinsic security capabilities into Cloud Foundation.

IT departments need the right x86 server infrastructure plus virtualization software to support traditional and modern apps with the utmost security and simple manageability. For these needs, more and more customers globally have standardized on Dell EMC PowerEdge servers plus VMware software. Flexibly architect and scale your hybrid cloud environment to transform what’s possible for your business, while empowering IT with intelligent automation, consistent management, and multi-layer security across physical and virtual machines and containers, from the core to the cloud to the edge.

Source: dellemc.com

Saturday, 8 August 2020

Practical Steps to Developing an AI Strategy

What are we in the business of doing? This is the first question organizations should answer when building an AI strategy. Anyone so charged needs clarity on why their company exists in the first place. The company’s mission statement provides the answer to ‘Why’ the company exists. Once the ‘Why‘ is understood, assess the ‘How‘: how does the company execute on its purpose?

For example, at Dell Technologies, our purpose (Why) is to drive human progress, through (How) access to technology, for people with big ideas around the world.

With the organization’s mission statement in mind, identify instances where the company’s operations and outcomes fall short of that mission. A senior executive leader should have broad insight into where those instances exist and can recruit business leads to scope out pain points as well. For example, a food processing company with a mission to provide superior products and services to customers might fall short of this mission if it reports high volumes of degraded produce or excess waste from product defects. This may point to underlying problems in the manufacturing process. If defects could be predicted with enough foresight to avoid excess waste, that would take the old saying that ‘Hindsight is 20/20’ and make it ‘Foresight is 20/20.’

All AI use cases are built on defined tasks that identify where AI can be used to improve company operations. In the case of the food processing company, the task could be to ‘predict, detect, identify or recognize,’ and the use case would be to predict or identify defective product early so that the company lives up to its mission. Some examples of AI tasks and the questions they answer are:

◉ Detection – Is something detectable/present?

◉ Recognition – Can something be identified?

◉ Classification – Does something belong to a certain class?

◉ Segmentation – If something exists, can it be carved out? How much of it exists?

◉ Anomaly detection – Is something out of line with expectations?

◉ Natural language processing – Can language and sentiment be understood?

◉ Recommendation – Can a solution be found for a desired outcome?
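
To make one of these tasks concrete for the food-processing example above, here is a minimal, hypothetical classification sketch using scikit-learn. The feature names, synthetic data and decision rule are invented purely for illustration; a real project would train on the company’s own production and quality data.

```python
# Minimal, hypothetical sketch: framing "predict defective product" as a
# classification task. Features and labels are synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Each row: [line_speed, oven_temp, humidity, vibration] (invented features)
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))
# Toy labelling rule: high vibration plus high temperature tends to mean a defect
y = ((X[:, 3] > 0.5) & (X[:, 1] > 0.0)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```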

Using this ‘task plus use case’ formula creates a running list of use cases from which a company can downselect. If several use cases can be resolved using the same kind of task, this allows for rinse-and-repeat opportunities and quick AI adoption wins. To downselect, the following questions should be considered (a minimal scoring sketch follows them):

◉ What is the business value? Will it have strategic impact? Can impact be measured?


Waste from product defects will cost any company over time. Records of these costs serve as evidence of business value and strategic impact if money can be saved and reallocated to innovative projects.

◉ How feasible is the use case?


Feasibility can be measured by the availability of relevant data to understand the problem and train AI models. What data is available, where it is stored and how it needs to be prepared for use in AI are important factors. Relevant data provides information about the factors that contribute to the desired outcome; for product defects, those could be indicators of faulty processing equipment.
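
As mentioned above the list, business value and feasibility together drive the downselection. Below is a minimal sketch of one possible weighted-scoring approach; the use cases, scores and weights are placeholders to be replaced with your own assessments.

```python
# Illustrative sketch: ranking candidate AI use cases by business value and
# feasibility. Use cases, scores (1-10) and weights are placeholders.

use_cases = [
    {"name": "Predict product defects", "task": "classification", "value": 9, "feasibility": 7},
    {"name": "Detect equipment anomalies", "task": "anomaly detection", "value": 7, "feasibility": 8},
    {"name": "Forecast demand", "task": "recommendation", "value": 8, "feasibility": 5},
]

VALUE_WEIGHT = 0.6        # assumed weighting; tune to your own priorities
FEASIBILITY_WEIGHT = 0.4

def score(uc: dict) -> float:
    """Weighted score combining business value and feasibility."""
    return VALUE_WEIGHT * uc["value"] + FEASIBILITY_WEIGHT * uc["feasibility"]

for uc in sorted(use_cases, key=score, reverse=True):
    print(f"{uc['name']:28s} task={uc['task']:18s} score={score(uc):.1f}")
```

Even a simple ranking like this makes the downselection discussion concrete and repeatable across business units.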

As mentioned, looking for external answers too early could take away the uniqueness of internal ideation from the people who know the business best. Identifying use cases first gives a reference point for researching what ‘similar others’ are doing with AI. External research indicates the level of effort needed to execute on the selected use cases, as some might already be deployed successfully and some might not. Web searches such as ‘AI + *insert industry*’ or ‘predict product defect + manufacturing + AI’ will yield pertinent results. Other AI tasks might come up that are applicable to other use cases.

Understanding the direction of AI adoption sets the stage for building a team of active participants to garner company-wide consensus for adoption. The following individuals would be candidates for this team:

◉ Data Architect/Engineer – Designs data retrieval, processing and preparation for consumption.

◉ Data Scientist/Machine or Deep Learning Engineer – Uses traditional statistical methods and ML/DL techniques to make predictions and solve complex data science and business tasks.

Both sets of individuals understand what software/hardware tools are needed to be successful. Other key individuals include:

◉ Database Administrator – Handles data access and control, mainly works with traditional data, and knows data locations.

◉ Data/Business Intelligence (BI) Analyst – Performs analyses on historical data and runs reports that quantify costs and strategic impact.

◉ BI Developer – Uses tools like SQL and Python to standardize analyses.

◉ Business Leads who care about the business benefits of AI.

Dell finds that successful AI projects follow a pattern akin to Maslow’s hierarchy of needs for reaching one’s potential: the hierarchy starts with a use case and works up to an optimized IT environment to support it.

Thursday, 6 August 2020

Data Value at the Edge

5G, Dell Technologies, PowerEdge, Internet of Things, Modular Infrastructure, Servers, Data Analytics, Opinions

The Edge, while frequently discussed as something new, is in fact another technical turn of the crank. Fueled by an abundance of smart devices and IoT sensors, worldwide data creation has been growing exponentially, driving our customers and partners to innovate. For example, between 2016 and 2018, there was an 878% growth in healthcare and life science data resulting in over 8 petabytes of data managed by providers per annum. Dell Technologies has been at the forefront of this data revolution enabling our customers and partners to leverage these new sources of data to drive business. The process of data creation, transformation and consumption has taken on new meaning as devices have become more integrated in our everyday lives. How this data lifecycle adds value to our customers and partners is the subject of our post today.

Data Creation


“Data is fuel.” We’ve heard this said time and time again. While that’s true, it doesn’t convey the process the data undergoes before it becomes something useful. “Data is fuel” is the net result of this process, not the genesis.

So, how do we get to this final, consumptive state with data? Data Creation is a constantly evolving mechanism driven by innovation, both in technology and in society. For example, the idea of remote patient monitoring has evolved, enabled by complementary technologies like 5G networks and IoT sensors. The ability of health care providers to securely retrieve data from smart watches, pacemakers, blood pressure cuffs, temperature sensors, electrocardiograms and insulin pumps (to name just a few) has driven a new paradigm of patient care and engagement. This wouldn’t have been possible a few decades ago and, thanks to innovative approaches in networks, data management and sensors, it represents one of many unique applications of the data creation process. Once this data is created, however, it must be transformed to be useful.

Data Transformation


Using the example of remote patient monitoring, the data generated by the various sensors is unique. It has no intrinsic value as a “raw” data stream: binary bits of encoded data provide no context, no perspective on what is happening with a patient. To fully understand, contextualize and derive consumptive value from it, it must be transformed. This transformation process extracts information, correlates and curates it through applications like artificial intelligence and analytics, and provides it back in a human- and machine-readable format. 1’s and 0’s become more than their sum and, as transformed data, are ready to be consumed. Continuing with the patient monitoring example, the doctor receiving this information can correlate and analyze the data feeds from a variety of sensors and sources and view them with an eye toward application. Recently, a Dell Technologies customer was able to increase its analyst-to-support-staff ratio to greater than 100:1 by leveraging this kind of data transformation to achieve better performance. Data is now one step closer to being fuel.
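
As a rough illustration of that extract, correlate and curate flow, here is a toy Python sketch that turns raw, context-free sensor readings into a curated record a clinician or downstream model could consume. The field names and clinical thresholds are invented for illustration only.

```python
# Toy sketch of the transformation step: raw readings are grouped per sensor,
# summarized, and curated with simple context flags. Field names and
# thresholds are illustrative placeholders, not clinical guidance.
from statistics import mean

raw_readings = [
    {"patient_id": "p-001", "sensor": "heart_rate", "value": 96},
    {"patient_id": "p-001", "sensor": "heart_rate", "value": 102},
    {"patient_id": "p-001", "sensor": "blood_pressure_sys", "value": 151},
    {"patient_id": "p-001", "sensor": "glucose_mgdl", "value": 188},
]

def transform(readings):
    """Correlate raw streams into one curated, human-readable record."""
    by_sensor = {}
    for r in readings:
        by_sensor.setdefault(r["sensor"], []).append(r["value"])

    summary = {sensor: round(mean(values), 1) for sensor, values in by_sensor.items()}
    # Curate: attach simple context so the output is consumable, not just raw bits
    thresholds = {"heart_rate": 100, "blood_pressure_sys": 140, "glucose_mgdl": 180}
    flags = [s for s, v in summary.items() if v > thresholds.get(s, float("inf"))]
    return {"patient_id": readings[0]["patient_id"], "vitals": summary, "flags": flags}

print(transform(raw_readings))
```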

Data Consumption


By now, data has been created in various modalities, transformed by analytics and artificial intelligence, and is ready to be consumed. Consuming data is more than just visualizing an output; it is the action. Our doctor has received remote patient data, securely viewed the correlated results and is now ready to provide a diagnosis. The diagnosis is the net result of this generative cycle. Rather than being static or one-time-use, data consumption has taken on new meaning. Broadening this example, doctors use data to predict how to better counteract and treat disease. Machine learning models consume training data to learn to take future action and to create and transform the outputs into new capabilities. Manufacturers view vehicle data in extended reality (XR), peeling apart systems to experience the real-time interactions between components. This generative cycle continues to evolve as technology advances, making the most of data’s kinetic energy.

Tuesday, 4 August 2020

Cloud Without Chaos

It’s difficult to imagine a world where you couldn’t order groceries, check your bank account, read the news, listen to music or watch your favorite show from the comfort of your smartphone. Perhaps even harder to fathom is that some of these services only hit the mainstream in the last 10-20 years. The world we live in today is a stark contrast to the world of 20 years ago. Banks no longer deliver value by holding gold bullion in vaults, but by providing fast, secure, frictionless online trading. Retailers no longer retain customers by having a store in every town, but by bringing superior customer service with extensive choice and a slick, tailored user experience. Video rental shops are a thing of the past, replaced by addictive, convenient media-streaming services such as Netflix. The list goes on.

Delight, Engage, Anticipate, Respond


At the beating heart of this digital convenience is software. A successful software organization can delight customers with a superior user experience, engage its market to determine demand, and anticipate change – such as regulatory change. It also has the means to respond quickly to any risk: security threats, economic flux or competitive threats. In summary, companies are rediscovering their competitive advantage through software and data.


How do you build good software? Contrary to popular belief, it’s more than just spinning up some microservices. Good software relies on several core pillars: abiding by lean management principles; harmonizing Dev and Ops to foster a DevOps culture; employing continuous delivery practices (such as fast iterations, small teams and version control); building software using modern architectures such as microservices; and, last but not least, utilizing cloud operating models. Each year, the highly regarded State of DevOps Report finds continued evidence that delivering software quickly, reliably and safely – based on the pillars mentioned above – contributes to organizational performance (profitability, productivity and customer satisfaction).

As the title suggests, this blog series focuses on the cloud pillar. In the context of software innovation, cloud not only provides the Enterprise with agility, elasticity and on-demand self-service but also – if done right – the potential for cost optimization. Cost optimization is paramount to unlocking continued investment in innovation, and when it comes to cloud design there should be no doubt: architecture matters.

Application First


How should an organization define its cloud strategy? Public cloud? Private cloud? Multi-cloud? I’d argue instead for an application-first strategy. Applications are an organization’s lifeblood, and an application-first strategy is a more practical approach that directs applications to their most optimal destination; this then naturally shapes any cloud choice. An application-first strategy classifies applications and maps out their lifecycle, allowing organizations to place applications on optimal landing zones – across private, public and edge resources – based on the application’s business value, specific characteristics and any relevant organizational factors.

Ultimately seeking affirmation on whether to invest in, tolerate or decommission an application, companies can use application classification methodologies to categorize their applications. Such categorization determines where change (if any) needs to happen. Change can happen at three layers:

◉ Application Layer
◉ Platform Layer
◉ Infrastructure Layer

The most substantial lift, but the one with the potential for the most business value, is a change to the application code itself, ranging from a complete rewrite, to materially altering code, to optimizing existing code. For applications that don’t merit source-code change, the value may lie in evolving the platform layer: re-platforming to a new runtime (from virtualized to containerized, for example) can unlock efficiencies not possible on the incumbent platform. For applications where changing the application code or platform layer is to be avoided, modernizing the infrastructure layer could make the most sense, reducing risk, complexity and TCO. Lastly, decommissioning applications at the end of their natural lifecycle is very much a critical piece of this jigsaw; after all, if nothing is ever decommissioned, no savings are made. This combination of re-platforming applications, modernizing infrastructure and decommissioning applications is crucial in freeing up investment for software innovation.
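
As a rough illustration of such a classification pass, the sketch below assigns a hypothetical disposition (invest, tolerate or decommission) and a suggested layer of change to each application. The attributes, thresholds and recommendations are illustrative placeholders, not a formal methodology.

```python
# Minimal sketch of an application-first classification pass. Attributes,
# thresholds and recommendations are invented placeholders for illustration.
from dataclasses import dataclass

@dataclass
class App:
    name: str
    business_value: int   # 1-10, assessed by business leads
    code_health: int      # 1-10, higher = healthier codebase
    end_of_life: bool

def classify(app: App) -> tuple:
    """Return (disposition, suggested layer of change) for an application."""
    if app.end_of_life:
        return "decommission", "none"
    if app.business_value >= 7 and app.code_health <= 4:
        return "invest", "application layer (refactor or rewrite)"
    if app.business_value >= 5:
        return "invest", "platform layer (re-platform, e.g. containerize)"
    return "tolerate", "infrastructure layer (modernize underlying infrastructure)"

portfolio = [
    App("order-service", business_value=9, code_health=3, end_of_life=False),
    App("intranet-wiki", business_value=4, code_health=6, end_of_life=False),
    App("legacy-fax-gateway", business_value=2, code_health=2, end_of_life=True),
]

for app in portfolio:
    disposition, layer = classify(app)
    print(f"{app.name:20s} -> {disposition:12s} {layer}")
```

The same pass can be extended to record each application’s characteristics and map it to a landing zone, which is the subject of the next section.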

Landing Zones


Where an application ultimately lands depends on its own unique characteristics and any relevant organizational factors. Characteristics include the application’s performance profile, security needs, compliance requirements and any specific dependencies on other services. These diverse requirements across an Enterprise’s application estate give rise to the concept of multi-cloud Landing Zones across Private, Public and Edge locations.


Cloud Chaos


Due to this need for landing zones, the industry has begun to standardize on a multi-cloud approach – and rightly so. Every application is different, and a multi-cloud model permits access to best-of-breed services across all clouds. Unfortunately, the multi-cloud approach brings with it a myriad of challenges. For example, public clouds deliver native services in proprietary formats, often necessitating costly and sometimes unnecessary re-platforming. The need to re-skill a workforce compounds these challenges, as do the complex financials created by a multi-cloud model due to inconsistent SLA constructs across different providers. Lack of workload portability is another critical concern because of these proprietary formats, further exacerbated by proprietary security and networking stacks, often resulting in lock-in and increased costs.


Cloud Without Chaos


Dell Technologies Cloud is not a single public cloud but rather a hybrid cloud framework that delivers consistent infrastructure and consistent operations, regardless of location. It is unique in the industry in its ability to run both VMware VMs and next-generation container-based applications consistently, whether the location is private, public or edge. This consistent experience is central to enabling workload mobility, which in turn is key to flexibility, agility and avoidance of lock-in.

Through combined Dell Technologies and VMware innovation, core services such as the hypervisor, developer stacks, data protection, networking and security are consistent across private, public and edge locations. Dell Technologies Cloud reduces the need for the complicated and costly re-platforming activities associated with migrating to a new cloud provider’s native proprietary services. Nonetheless, organizations wishing to leverage native public cloud services can still do so, while also benefiting from proximity to VMware-related public cloud services.

Consistent operations also reduce the strain on precious talent by allowing companies to capitalize on existing skillsets. Organizations can manage applications consistently – regardless of location – and avoid the costly financial implications of re-skilling staff each time they choose a new cloud provider.

Looking at this through the lens of the developer, and with modern applications in mind, container standards thankfully span the industry, which mitigates the need for wholesale container-format changes between clouds. Even so, each cloud provider has an opinion on the ecosystem (container networking, container security, logging, service mesh, etc.) around containers in its native offerings, such as CaaS and PaaS. This bias can precipitate the need for tweaks and edits each time an application moves to another cloud, effectively burning developer cycles. Instead, an organization can maximize developer productivity by employing turnkey, cloud-agnostic developer solutions that are operationalized and ready for the Enterprise. The developer can write an application once and run it anywhere, without tweaks or edits to suit a new cloud provider’s stack.

At the other end of the application scale, most Enterprise organizations own a significant portion of non-cloud-ready, non-virtualized workloads, such as bare-metal applications and unstructured data. Through Dell Technologies’ extensive portfolio, these workloads are fully supported on various platforms and never treated as an afterthought.

Likewise, a critical element of any organization’s cloud investment is its strategy for cloud data protection. Dell Technologies Data Protection Solutions cover all hybrid cloud protection use cases: in-cloud backup, backup between clouds, long-term retention to the cloud, DR to the cloud, and cloud-native protection.

Increased Agility, Improved Economics & Reduced Risk


Ultimately, Dell Technologies Cloud delivers increased business agility through self-service, automation and the unique proposition of true portability across private, public and edge locations. This agile, flexible and resilient foundation can enable Enterprise organizations to accelerate software innovation and, in turn, quicken time-to-market.


In addition to the business agility and workload mobility gained from this consistent hybrid cloud model, companies can also improve cloud economics and leverage multiple consumption options, irrespective of cloud location. Any modernization also mitigates business risk by eliminating technical debt, minimizing operational complexity and avoiding unknown and inconsistent future costs.