Tuesday, 31 March 2020

Creating a User Experience Fit for a Mobile Workforce


Because of the international nature of my work, I’m often traveling for four days out of every week, and I might connect from as many as ten different places a day. Between 6 a.m. and 9 p.m., I use internet connections at airports, coffee bars and hotel lobbies, and the amount of time I spend online in each case can vary between five minutes and two hours.

In fact, every employee is a ‘consumer of the workplace,’ no matter where they are in the world, and they can all benefit from an improved experience. Fast connectivity and easy access to email servers and collaboration software from any location are paramount for efficiency.

Remote working is on the rise


As the world becomes more complex, we must leverage simple solutions to meet new challenges. The reality today is that more than two-thirds of people around the world work remotely at least once a week, networked teams are replacing the static workforce and new generations entering the job market expect a mobile work environment. Nearly 70 percent of organizations acknowledge that a mobile workforce enables business strategies.

To guarantee a fluid user experience, the mobile workforce depends on easy-to-use B2B applications and a fast and accessible internet connection. However, the wide range of consumer applications people use in their everyday personal lives usually offer a better user interface than B2B applications designed for ‘consumers of the workplace.’ As a result, the average employee is inefficiently switching between 35 job-critical applications more than 1,100 times every day.

We shouldn’t cut corners when it comes to the user experience. A well-designed user interface can increase the conversion rate by 200 percent for B2C websites, so imagine what an easy-to-use business application could do for the employee experience. So let’s take a look at the possibilities for improving this experience and attracting the next generations to our workplace of the future.

Flexible work enhances collaboration


Whether working remotely with a colleague in another country or facilitating a brainstorm meeting with 50 people at the office, advancements such as 5G, collaboration apps and conferencing software offer not only more ways of working together but also more efficient ones, adapted to what customers expect from companies.

As products evolve, companies have to adapt their service offering – internally as well as externally.

Sandvik, a high-tech engineering group that offers tooling and tooling systems for metal cutting, needed to gain a better understanding of its internal users, because people are increasingly working remotely, around the world, both in the offices and from inside mines. Sandvik now takes a completely new approach to digital happiness: offering a customized experience without offering too many options, enabling a sense of empowerment for IT as well as for the end user.

The ability to work from different locations is critical. Flexible work benefits companies, motivates people to take the next step in their careers and clearly benefits the environment.

Flexibility puts employees more in charge of their commute time, their way of working and their work-life balance. This is exactly what Shiseido, a worldwide cosmetic company, had in mind when it transformed the workplace. Half of the workforce, a total of 2,000 people, are now working remotely. Together with Shiseido we developed a custom-made workplace solution giving the right person and the right device access to the data needed to work securely from any location.

Instilling a culture of trust


Investments in technology are not the only prerequisite for facilitating distributed teams or remote working. The remote workforce also has to be supported by HR. People need to be comfortable with this way of working. Top-down decisions enable processes that instill a culture of trust in the company.

Promoting collaboration through technology starts at the office. Companies that constantly evaluate what they are offering their employees, while aligning their technology with employees’ needs, are surging ahead in the race for talent. The workplace of the future is tailored to the specific needs of a diverse workforce. There is still plenty of room to learn and improve in terms of interfaces and information-sharing tools.

Technology enhances creative ways of working by making it easier to collaborate with people all over the world, but only if it benefits the user experience. A completely new and broader skilled workforce becomes available to work on the ever more complex challenges people face today.

Source: dellemc.com

Monday, 30 March 2020

Data Protection Complexity – The Enemy of Innovation

Few would argue that organizations are becoming increasingly dependent on data to fuel innovation, drive new revenue streams and provide deep insight into the needs of their customers, partners and stakeholders. Yet despite the increased investments to safeguard data and keep mission-critical application services highly available across multi-cloud environments, we are witnessing a steady uptick in application downtime and data loss across organizations of all sizes. This is resulting in lost revenue, lost employee productivity and lost opportunities.

According to the 2020 Global Data Protection Index (GDPI) snapshot survey, 82 percent of the respondents reported experiencing a disruptive event in the past 12 months – meaning they experienced downtime, data loss or both – up from 76 percent in the 2019 GDPI survey. The average annual cost of data loss in this 12-month period exceeded $1M, slightly higher than the year prior, while the cost of downtime surged by 54 percent ($810k in 2019 vs. $520k in 2018).

What’s behind this trend of persistent digital disruption? Certainly, the ongoing proliferation of data and application services across edge locations, core data center and multi-cloud environments is making it very challenging for IT organizations to ensure data is continuously protected, compliant and secure.

Compounding IT complexity is the continued evolution of application services themselves. Organizations are moving toward faster, more agile ways of deploying applications to market – containers, SaaS and cloud-native applications are altering the dynamics of how data is protected.

Likewise, distributed edge technologies like IoT are driving unprecedented data volumes as smart cities, autonomous vehicles, medical devices and sensors of virtually every kind imaginable are capturing troves of information across the digital landscape. In the not-so-distant future, there will be more data residing in edge locations than in all of the public clouds combined – placing inordinate strains on data protection infrastructure and IT teams to efficiently manage, protect and secure this information.

To illustrate how data protection complexity can have an outsized impact on downtime, data loss and their associated costs, consider the following data points from the GDPI research:


Eighty percent of the survey respondents reported they were using solutions from multiple data protection vendors. The irony is that these organizations are likely investing more in time, money and staffing resources to protect their data and applications, yet their annual data loss and downtime costs are significantly higher than organizations working with a single data protection vendor.

Moreover, the majority of respondents indicated a lack of confidence in their solutions to help them recover data following a cyber-attack, adhere to compliance regulations, meet application service levels and prepare them for future data protection business requirements. It’s no surprise, then, that two-thirds of organizations are concerned that they will continue to experience disruption over the next 12 months.

To combat data protection complexity, minimize disruption and mitigate the risk of data loss and downtime, organizations need simple, reliable, efficient, scalable and more automated solutions for protecting applications and data regardless of the platform (physical, virtual, containers, cloud-native, SaaS) or of the environments that workloads are deployed into (edge, core, multi-cloud).

These solutions also need to help organizations ensure compliance and enhance data security across hybrid, multi-cloud infrastructure. And they need to provide the global scale that organizations need as their application workloads and data volumes exponentially increase in the coming years.

By delivering a deep portfolio of data protection solutions that address the need for traditional and modern workloads across edge, core and multi-cloud environments, Dell Technologies provides proven and modern data protection that delivers simplified, efficient, and reliable protection and recovery of applications and data, while ensuring compliance and mitigating the risks of data loss through our integrated cyber resiliency capabilities.

Our industry-leading data protection software and integrated data protection appliances are leveraged by our customers to protect critical data assets on-premises and in the cloud. Today, for example, we are protecting over 2.7 exabytes of data for over 1,000 customers in the public cloud.

And we continue to double down on our investments in data protection to deliver the most innovative data protection solutions available in the market. Recently we announced the support for protecting Kubernetes containers on PowerProtect Data Manager, enabling our customers to accelerate innovation by seamlessly protecting critical data deployed in containers.

And this is just the beginning. Our agile development engine is primed to release a steady stream of new data protection capabilities every quarter to enable our customers to protect and safeguard their critical data assets however and wherever they are deployed across edge, core and multi-cloud environments.

As the industry leader in data protection software and integrated data protection appliances, Dell Technologies is committed to delivering the end-to-end, innovative data protection solutions our customers need to eliminate data protection complexity and break the cycle of digital disruption to help them transform now and well into the future.

Sunday, 29 March 2020

Enhanced Data Center Visibility with Dell EMC Storage Resource Manager

Over the years, as data center complexity has increased to support the volume, variety, velocity and veracity of data in the enterprise, organizations often find it challenging to get a clear understanding of how infrastructure is performing. Visibility into data center performance is important to understand how resources are used and it provides an opportunity to improve capacity and resource planning. This enables storage admins to address growing business requirements with agility.

Storage admins who love the data center visibility that they get with Dell EMC Storage Resource Manager (SRM) will be delighted to know that we are taking SRM further forward with our latest release – SRM 4.4.

What is SRM?


SRM is our on-premises storage monitoring and reporting tool that works with Dell EMC storage and supports many third-party data center products. With SRM, storage admins can monitor applications, servers, SAN and storage from a single pane of glass, enabling an enterprise-wide view of their entire data center environment. SRM provides information on capacity planning, performance troubleshooting, workload placement, configuration compliance, charge-back, proactive alerting and newly simplified custom reporting. With the release of version 4.4, SRM now includes some exciting and innovative enhancements that make it easier to find and solve problems, providing even more comprehensive support across your entire data center.

Enhanced Troubleshooting and Alerting


SRM’s Event Correlation feature helps identify the root cause of a performance issue by looking at related health and configuration events in the context of Key Performance Indicators (KPIs). With SRM’s improved Performance Analysis Dashboard, you can easily identify overloaded shared components and drill into the connected hosts to see which ‘bully’ consumers are impacting which ‘victims’, helping to address ‘noisy neighbor’ scenarios. In addition to providing alerts for congestion issues and associated recommendations, the SRM SAN congestion troubleshooting feature has been enhanced to provide a report detailing the ports affected by the slow-drain condition.

Even More Comprehensive Third-Party Support


SRM 4.4 now supports VMware vSAN, which is a foundational component of VMware Cloud Foundation and the storage component of VxRail and vSAN Ready Nodes. SRM reports on vSAN-enabled cluster resources, host systems and datastore objects. In addition, SRM monitors vSAN for health, capacity, configuration and performance problems. As a comprehensive reporting tool, SRM 4.4 also extends discovery and monitoring support for third-party storage arrays, with reports on inventory, capacity and performance metrics.

New Cost-Savings Dashboard


SRM can help you optimize resource utilization by identifying cost-savings opportunities in the storage infrastructure, from powered-off VMs and LUNs with no IOPS to non-utilized switch ports and many more.

SRM Potential Savings Dashboard
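
As a minimal sketch of the kind of reclamation scan such a dashboard performs (the inventory records and field names below are hypothetical and do not reflect SRM’s actual data model):

```python
# Minimal sketch of cost-savings reclamation logic.
# Inventory records and field names are hypothetical, not SRM's data model.

inventory = [
    {"type": "vm",   "name": "app-vm-01",  "powered_on": False, "provisioned_gb": 200},
    {"type": "vm",   "name": "app-vm-02",  "powered_on": True,  "provisioned_gb": 500},
    {"type": "lun",  "name": "lun-0042",   "avg_iops_30d": 0,   "provisioned_gb": 1024},
    {"type": "lun",  "name": "lun-0043",   "avg_iops_30d": 850, "provisioned_gb": 512},
    {"type": "port", "name": "switch1:17", "connected": False},
]

def reclaimable(item):
    """Flag resources that look idle: powered-off VMs, zero-IOPS LUNs, dark ports."""
    if item["type"] == "vm":
        return not item["powered_on"]
    if item["type"] == "lun":
        return item["avg_iops_30d"] == 0
    if item["type"] == "port":
        return not item["connected"]
    return False

candidates = [i for i in inventory if reclaimable(i)]
savings_gb = sum(i.get("provisioned_gb", 0) for i in candidates)
for i in candidates:
    print(f"potential reclaim: {i['type']} {i['name']}")
print(f"total reclaimable capacity: {savings_gb} GB")
```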

New Simplified Customizable Reporting


SRM now provides an easy-to-use wizard to build custom reports. The wizard provides:

◉ An easy-to-understand storage model that exposes data through Virtual Tables

◉ Less time spent building custom reports, thanks to metrics with pre-defined settings

◉ Simplified workflows with drag-and-drop operations

In addition, SRM 4.4 provides numerous other enhancements, ranging from REST APIs for changing device passwords to platform-specific improvements, and adds support for the latest platform versions.
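
As a hedged illustration of what driving such an API from a script could look like, the sketch below issues a credential-rotation request over REST. The base URL, route and payload fields are placeholders invented for the example, not SRM’s documented API; consult the SRM 4.4 REST API guide for the actual endpoints.

```python
# Hypothetical sketch of calling a management REST API from Python.
# BASE, the route and the payload fields are placeholders, not SRM's real API.
import requests

BASE = "https://srm.example.com/APG-REST"      # placeholder base URL
AUTH = ("svc_account", "s3cr3t")               # placeholder credentials

def change_device_password(device_id: str, new_password: str) -> None:
    """Illustrative PUT to rotate a discovered device's stored credentials."""
    resp = requests.put(
        f"{BASE}/devices/{device_id}/credentials",   # hypothetical route
        json={"password": new_password},
        auth=AUTH,
        verify=True,
        timeout=30,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    change_device_password("array-1234", "N3w-P@ssw0rd")
```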

Saturday, 28 March 2020

Who’s Holding Your Data Wallet?

The volume of data created by today’s enterprise workloads continues to grow exponentially. Data growth, combined with advancements in artificial intelligence, machine learning and containerized application platforms, creates a real challenge in supporting critical business requirements and can place heavy demands on your infrastructure. Adaptability and agility mean having the right resources to service ever-changing needs. Performing at scale while keeping up with data growth to deliver business-critical outcomes comes from a well-architected solution that comprehends all the functional ingredients: networking, storage, compute, virtualization, automated lifecycle management and, most importantly, the applications. It also comes from a close partnership between customers and technology suppliers to understand the business drivers needed to deliver a best-in-class outcome.

Would you ask a stranger to hold your wallet full of cash? Metaphorically speaking, this might be what you’re asking an emerging technology vendor or a startup in the storage space to do if you hand over your key data currency. You might be willing to take a chance on a new pizza delivery service, but I bet you would think differently if someone came to your house to collect all your data.

We respect the innovation that emerging technologies and startups bring. However, when it comes to your most valuable asset – data – it’s important to partner with a vendor with a proven track record of leadership and experience who will be there for you well into the future. One such example is the Dell EMC VxFlex software-defined storage (SDS) platform, which offers customers the kind of predictable scalable performance required to host their critical application workloads and data storage in a unified fabric.

The VxFlex platform can grow compute or storage independently, or in an HCI configuration, with linear incremental performance while sustaining sub-millisecond latency. No matter what deployment model you need today or in the future, VxFlex provides the flexibility and a non-disruptive upgrade path to host any combination of workloads, without physical cluster segmentation, scaling modularly by the node or by the rack. Whether you need to support conventional Windows and Linux applications or next-generation digital transformation initiatives, VxFlex helps you reduce the risk associated with future infrastructure needs.


VxFlex can handle your most critical and demanding workloads in a fully lifecycle-managed, end-to-end system, using any combination of hypervisors, bare metal, or container technologies to meet or exceed your requirements. A great example of VxFlex at work is the Dell EMC VxFlex solution for Microsoft SQL Server 2019 Big Data Clusters, which deploys a future-proof design that improves business outcomes through better analytics. This solution highlights the use of persistent storage for Kubernetes deployments and performance-sensitive database workloads using a unified compute, networking and systems management infrastructure that makes it operationally complete. The VxFlex software-defined architecture provides an agile means to blend changing workloads and abstraction models that can adjust as workload demands change.


Dell Technologies is a market leader across every major infrastructure category and enables you to proactively leverage technology for competitive advantage. Dell Technologies gives you the ability to drive your business and not be driven by technology.

Friday, 27 March 2020

Starting Your Career in Data Science: Reasons to Achieve Goal


What is Data Science?

To put it in simple terms, Data Science is the study of data. It uses advanced algorithms and scientific methods to collect, store, and analyze vast sets of data, both structured and unstructured, to extract useful information effectively.

Say, for example, you type “cute baby video” into Google. As soon as you search, Google gathers the best results for you on the search engine results page.

So how does Google build this list? This is where data science comes into the picture. Google uses advanced data science algorithms to give you the best search results (i.e., useful information). And so you can now watch cute baby videos. Thanks to Data Science!
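
As a toy illustration of the kind of relevance ranking at work here (not Google’s actual algorithm), the sketch below scores a few invented documents against the query with a simple TF-IDF weighting:

```python
# Toy relevance ranking: score documents against a query with TF-IDF.
# Documents and query are invented; this is not Google's algorithm.
import math
from collections import Counter

docs = {
    "d1": "cute baby video compilation funny baby",
    "d2": "baby care tips for new parents",
    "d3": "cute cat video best cat moments",
}
query = "cute baby video"

def tokens(text):
    return text.lower().split()

doc_tokens = {d: tokens(t) for d, t in docs.items()}
n_docs = len(docs)

def idf(term):
    df = sum(1 for toks in doc_tokens.values() if term in toks)
    return math.log((1 + n_docs) / (1 + df)) + 1  # smoothed inverse document frequency

def score(doc_id):
    counts = Counter(doc_tokens[doc_id])
    return sum(counts[t] * idf(t) for t in tokens(query))

for doc_id in sorted(docs, key=score, reverse=True):
    print(doc_id, round(score(doc_id), 3))
```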

What Does a Data Scientist Do?

Most data scientists in the industry have advanced training in statistics, math, and computer science. Their experience spans a vast horizon that also extends to data visualization, data mining, and information management.

It is also common for them to have previous knowledge of infrastructure design, cloud computing, and data warehousing.

Here are Some Advantages of Data Science in Business:

  • Mitigating risk and fraud. Data scientists are trained to identify data that stands out in some way. They create statistical, network, path, and big data methodologies for predictive fraud propensity patterns and use those to develop signals that help ensure timely responses when unusual data is recognized (a simple statistical version of this idea is sketched after this list).
  • Delivering relevant products. One of the advantages of data science is that organizations can find out when and where their products sell best. This can help deliver the right products at the right time and can help companies develop new products to meet their customers’ requirements.
  • Personalized customer experiences. One of the most buzzworthy advantages of data science is the ability for sales and marketing teams to understand their audience on a very granular level. With this knowledge, an organization can create the best possible customer experiences.
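
Here is a minimal sketch of the “data that stands out” idea, using a plain z-score on invented transaction amounts; real fraud models are far richer than this:

```python
# Flag transactions whose amount is more than three standard deviations
# from the historical mean. Data and threshold are illustrative only.
from statistics import mean, stdev

history = [42.0, 55.5, 38.2, 61.0, 47.9, 52.3, 44.1, 58.7, 49.5, 53.0]
new_transactions = [50.0, 49.0, 512.0]

mu, sigma = mean(history), stdev(history)

for amount in new_transactions:
    z = (amount - mu) / sigma
    flag = "ALERT" if abs(z) > 3 else "ok"
    print(f"amount={amount:8.2f}  z-score={z:6.2f}  {flag}")
```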

Reasons Behind Choosing Data Science Certification

1. Enabling Management and Officers to Make Better Decisions

An experienced data scientist is likely to become a trusted advisor and strategic partner to the organization’s upper management by ensuring that the staff maximizes its analytics capabilities.
A data scientist communicates and demonstrates the value of the institution’s data to facilitate improved decision-making processes across the entire organization by measuring, tracking, and recording performance metrics and other knowledge.

2. Guiding Actions Based on Trends, Which in Turn Help to Define Goals

A data scientist analyzes and explores the organization’s data, after which they recommend and prescribe specific actions that will help increase the institution’s performance, better engage customers, and ultimately increase profitability.

3. Stimulating the Staff to Adopt Best Practices and Focus on Issues That Matter

One of the responsibilities of a data scientist is to ensure that the staff is familiar and well-versed with the organization’s analytics products. They set the team up for success by demonstrating the efficient use of the system to extract insights and drive action.

Once the staff knows the product’s capabilities, their focus can shift to addressing key business challenges.

4. Identifying Opportunities

While working with the organization’s current analytics system, data scientists question the existing processes and assumptions in order to develop additional methods and analytical algorithms.
Their job requires them to continually grow the value derived from the organization’s data.

5. Decision Making with Quantifiable, Data-Driven Evidence

With the arrival of data scientists, gathering and analyzing data from different channels has removed the need to take high-stakes risks.
Data scientists create models using existing data that simulate a variety of potential actions. In this way, an organization can learn which path will bring the best business outcomes.
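
As a toy illustration of that “simulate before you commit” idea (with entirely made-up conversion rates and order values), the sketch below estimates the expected revenue of two candidate actions:

```python
# Illustrative simulation comparing two candidate actions before rollout.
# Conversion rates, order values and visitor counts are invented for the sketch.
import random

def simulate(conversion_rate, avg_order_value, visitors=10_000, trials=500):
    totals = []
    for _ in range(trials):
        orders = sum(1 for _ in range(visitors) if random.random() < conversion_rate)
        totals.append(orders * avg_order_value)
    return sum(totals) / len(totals)

# Action A: small discount, modest lift in conversion.
# Action B: big discount, bigger lift but lower order value.
revenue_a = simulate(conversion_rate=0.030, avg_order_value=95.0)
revenue_b = simulate(conversion_rate=0.042, avg_order_value=70.0)
print(f"expected revenue A: {revenue_a:,.0f}")
print(f"expected revenue B: {revenue_b:,.0f}")
```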

6. Testing These Decisions

Half of the battle is making decisions and implementing those changes. The other half is knowing how those decisions have affected the organization.
This is where a data scientist comes in. It pays to have someone who can measure the key metrics related to essential changes and quantify their progress.

7. Identification and Refining of Target Audiences

From Google Analytics to consumer surveys, most companies have at least one source of customer data being collected. But if it is not used well, for instance to identify demographics, the data is not useful.

The essence of data science is based on the capacity to take existing data that is not significantly useful on its own and combine it with other data points to create insights an organization can utilize to learn more about its customers and audience.

A data scientist can help identify key groups with precision through a thorough analysis of disparate sources of data. With this in-depth knowledge, organizations can tailor services and products to customer groups and help profit margins flourish.

8. Recruiting the Right Talent for the Organization

Reading through resumes all day is a chore in a recruiter’s life, but big data is changing that. With the amount of information available on talent through social media, corporate databases, and job search websites, data science specialists can work their way through all these data points to find the candidates who best fit the organization’s requirements.

By mining the vast amount of data that is already available, from in-house resumes and applications to sophisticated data-driven aptitude tests and games, data science can help your recruitment team make speedier and more accurate selections.

Career Benefit of Data Science:

Career Benefit Of Data Science

Final Words:

Data science can add value to any business that uses its data well. By learning data science, you set yourself up for a strong future.

Notably, even a little prior knowledge of computer science is enough to start a career in data science.

Thursday, 26 March 2020

Building One of the World’s First MEC Solutions


While Multi-Access Edge Computing (MEC) is not dependent on 5G, and 5G is not dependent on MEC, there are clear synergies between the 5G architecture enabling decentralized deployment and MEC enabling new services and experiences. MEC will be a key enabler of future growth for telecoms operators as they roll out 5G services in the next few years.

Although the discussion of edge computing is clouded by hype and, at times, misinformation, we are starting to see real examples of edge computing being used across industries. Mobile operators globally are building MEC sites, both in their own facilities and on customer premises, and launching new services for enterprise customers. Service providers (SPs) like AT&T, SK Telecom (SKT), SingTel, KDDI, Vodafone and Deutsche Telekom are among the early providers of edge solutions.

Dell Technologies is working with SK Telecom to help the operator build and deploy its own MEC platform in 2020, accelerating the industry leader’s 5G MEC strategy.

SK Telecom: a 5G and MEC pioneer


SK Telecom is the largest mobile operator in Korea and an industry leader in many respects: it has both a strong core offering, with over 41% market share (as of July 2019), and significant revenues outside telecoms, primarily from security and commerce (USD $7.9Bn in revenues in FY 2018). It was the first operator to launch 5G, in April 2019, and accumulated over 1 million 5G subscribers within 8 months, 10 times the total number of 5G subscribers in the U.S. across all operators in the same period. It has achieved the widest 5G coverage in South Korea, rolling out its 5G network to traffic-concentrated areas, including the main areas of 85 cities nationwide. Its ambition is to continue the growth of 5G services in the B2B and B2B2X sectors, offering low-latency solutions to enterprises (e.g. security), industry (e.g. smart factory) and developers (e.g. cloud gaming).

However, 5G alone is not enough. MEC is a critical enabler for delivering low-latency services at guaranteed levels, data-centric services (such as IoT), differentiated customer experiences, improved security and reduced TCO for the end customer. In traditional networks, round-trip latency to the cloud generally averages between 30 and more than 100 milliseconds. With MEC, this could potentially drop to under 10 milliseconds.
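
A rough way to see that difference for yourself is to time TCP connection setup to a distant cloud endpoint and to a nearby edge endpoint. The hostnames below are placeholders, and connect time is only a proxy for full application latency:

```python
# Rough latency comparison: time TCP connection setup to two endpoints.
# Hostnames are placeholders; connect time approximates network RTT only.
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=3):
            pass
        timings.append((time.perf_counter() - start) * 1000)
    return min(timings)  # best-case sample approximates the network round trip

if __name__ == "__main__":
    for label, host in [("central cloud", "cloud.example.com"),
                        ("MEC edge site", "edge.example.com")]:
        try:
            print(f"{label:13s}: ~{tcp_rtt_ms(host):.1f} ms")
        except OSError as err:
            print(f"{label:13s}: unreachable ({err})")
```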

SKT is offering two types of MEC: distributed MEC leveraging its own facilities and on-premises MEC, where edge compute infrastructure and services are deployed for a customer on their selected sites. The target applications across these MEC domains include:

◉ Distributed MEC:
     ◉ Cloud VR
     ◉ Virtual mobile interface
     ◉ Traffic management
     ◉ Cloud gaming
◉ On-site MEC:
     ◉ Smart factory (machine vision)
     ◉ Smart hospital
     ◉ Smart robot

With MEC, collaboration is key

SKT has an open approach to developing its MEC offering, emphasising the need to work closely with different partners in the ecosystem:

1. Infrastructure vendors, including Dell Technologies and Intel
2. Software vendors
3. Public cloud providers
4. Global SPs

SKT has already announced its partnership with AWS on Wavelength, the cloud provider’s 5G MEC offering that will launch in 2020. The mobile operator views cooperation with public cloud providers as a critical component to seed the market and allow developers to add workloads to MEC or move them from the public cloud.

A recent announcement on SKT’s initiative with the Bridge Alliance demonstrates the need to collaborate within the telecoms industry. The Global MEC Task Force includes other Bridge Alliance members (e.g. Singtel, Globe, Taiwan Mobile and PCCW Global) and seeks to accelerate progress of 5G and MEC across SPs globally. The Bridge Alliance alone encompasses 34 operators serving more than 800 million customers across the Asia Pacific region.

Dell Technologies provides the foundation for the MEC architecture


In addition to working with cloud providers, SKT has built its own MEC architecture to provide a platform with diverse environments for MEC application workloads to run on bare metal, virtual machines and containers. Using the platform, developers can manage and orchestrate workloads on SKT’s MEC infrastructure.

SK Telecom’s MEC architecture

Source: SK Telecom

Underpinning this is Dell Technologies’ infrastructure, offering both best-in-class network switches to manage the real-time traffic flows to and from the MEC nodes, and edge servers hosting the MEC workloads. The Dell Technologies infrastructure also offers the management framework to integrate into SKT’s existing Operational Support Systems (OSS) environment, easing the deployment and day-to-day operations of this network at scale.

Tuesday, 24 March 2020

The 4th Industrial Revolution: Digitally Transformed Workloads

Increasingly, engineers and infrastructure administrators are looking for workload agility. But what exactly is workload agility? First, let’s define workload. Workloads are the computing tasks that employees initiate on systems to complete their companies’ missions. Precision Medicine, Semiconductor Design, Digital Pathology, Connected Mobility, Quantitative Research, and Autonomous Driving are all examples of compute-intensive workloads that are typically deployed on computing infrastructure. “Workload agility,” then, refers to the ability to seamlessly spin up resources to complete any of your company’s required compute-intensive tasks.

In an ideal world, dedicated and siloed infrastructure is replaced with agile, on-demand infrastructure that automatically reconfigures itself to meet diverse, constantly changing workload requirements. This delivers increased hardware utilization, lower cost, and most importantly, shorter time to market. Since dedicated infrastructure can be cost prohibitive, establishing workload agility by eliminating silos within the data center, while dynamically provisioning compute resources for the pending workload, helps maximize ROI and hardware utilization and, ultimately, fulfill the organization’s mission.

Workloads in most enterprises are asymmetric, requiring GPUs, FPGAs, and/or CPUs in various compute states.  Some workloads rely on extensive troves of data. Other workloads, including Autonomous Driving, run through phases where compute requirements change drastically. The ideal solution for users enables them to schedule tasks seamlessly — without disrupting their co-workers’ jobs — all the while changing compute requirements as needed. The ideal solution for IT administrators delivers complete flexibility, with infrastructure 100 percent utilized to maximize ROI securely and with full traceability.

Time for a Revolution?


The 4th Industrial Revolution, as categorized by the World Economic Forum, is the blurring of lines between physical, digital, and biological systems. Integral to blurring the lines is the increasingly pervasive “As-A-Service” model. The desired outcome of this model is to bring agility (and performance) to workloads independent of the underlying hardware. This delivers a user-experience where no understanding of the formal “plumbing” of the infrastructure is required. Whether performing hardware-in-the-loop testing for Autonomous Driving or analyzing pharmacogenomics in Precision medicine, the underlying hardware simply works with high performance. The abstraction of the hardware and infrastructure means that the user interface is all the user needs to understand.

Data Center Clouds?


Key to the blurring of lines is allowing a wide variety of jobs to be performed on-demand – seamlessly and with high performance – regardless of where the actual infrastructure and data are located. To achieve this, on-premises clouds are composed of a handful of components, including:

◉ A User Interface: The user interface makes the digital become real. Bright Computing, for example, provides a user interface which is in use today by many Dell customers, and is a core component of Dell EMC Ready Solutions.

◉ A Scheduler: The job scheduler is responsible for queuing jobs on the infrastructure and must work in harmony with provisioning and the user interface to assure the right hardware is available at the right time for each job.

◉ Stateless to stateful storage management: A secure way to navigate stateless containers in a stateful storage world is required.

◉ Storage provisioner: Lastly is the ability to do the actual provisioning and de-provisioning of storage on demand via Container Storage Interfaces (CSI).

The Cloud Window to the World


Comprehensive automation and standardization are key to taking the complexity out of provisioning, monitoring and managing clustered IT infrastructure. As an example, automation that builds on-premise and hybrid cloud computing infrastructure that simultaneously hosts different types of workloads while dynamically sharing system resources between them is an area of focus for Bright Computing. With Bright, your computing infrastructure is provisioned from bare metal: networking, security, DNS and user directories are set-up, workload schedulers for HPC and Kubernetes (K8s) are installed and configured, and a single interface for monitoring overall system resource utilization and health is provided – all automatically. An intuitive HTML5 GUI provides a comprehensive management console, complete with usage and system reports, and a powerful command line interface for batch programming, bringing simplicity to flexible infrastructure management. Extending your environment automatically to AWS, Azure and hybrid clouds is also provided — all for the ultimate in workload agility – a dynamically managed, seamless hybrid cloud.

Scheduled for Success


One of the most significant challenges of a truly dynamic and flexible on-demand computing environment is sharing resources between jobs that are under the control of different workload managers. For example, if you want to run HPC jobs under Slurm or PBS Pro and containers on the same shared infrastructure under K8s, you have a problem: Slurm and PBS Pro do not coordinate resource utilization between each other, much less with K8s.  As a result, resources must be split manually — which creates silos that need to be managed separately. Application resource limitations become common and add to the management headache.

Bright software solves this issue with a solution that monitors the demand and capacity of resources assigned across workload managers, and dynamically re-assigns resources between them based on policy. Likewise, it also adds cloud resources to on-premise infrastructure as they become available for some or all workload managers and their jobs.
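
Here is a conceptual sketch of that policy-driven rebalancing, not Bright’s actual implementation: idle nodes drift toward whichever workload manager has the deeper backlog, with the queue depths and the move_node() action as stand-ins for real scheduler integrations.

```python
# Conceptual sketch of policy-driven rebalancing between two workload managers
# (say, Slurm and Kubernetes) sharing one node pool. Not a real product API.

def rebalance(nodes, slurm_pending, k8s_pending, move_node):
    """Shift idle nodes toward whichever scheduler has the deeper backlog."""
    idle = [n for n in nodes if n["state"] == "idle"]
    for node in idle:
        if slurm_pending > k8s_pending and node["owner"] != "slurm":
            move_node(node, "slurm")          # e.g. drain from k8s, add to Slurm partition
            node["owner"] = "slurm"
            slurm_pending -= 1
        elif k8s_pending > slurm_pending and node["owner"] != "k8s":
            move_node(node, "k8s")            # e.g. uncordon into the Kubernetes cluster
            node["owner"] = "k8s"
            k8s_pending -= 1

# Example run with a fake inventory and a logging stand-in for the real action.
nodes = [
    {"name": "node01", "owner": "slurm", "state": "idle"},
    {"name": "node02", "owner": "k8s",   "state": "idle"},
    {"name": "node03", "owner": "k8s",   "state": "busy"},
]
rebalance(nodes, slurm_pending=8, k8s_pending=1,
          move_node=lambda n, target: print(f"moving {n['name']} -> {target}"))
```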


How to Schedule?


Schedulers are a critical element of an agile infrastructure. Schedulers tend to be vertical-specific: some industries love K8s, while others are very Slurm- or PBS Pro-centric. With the recent rise of K8s for workload movement, workload agility, and virtualized computing, we will focus on this scheduler. It is important, however, that your solution supports multiple schedulers. Bright Computing, for example, works with all the major schedulers mentioned above and can handle them all in tandem.

K8s focuses on bringing order to the container universe. Containers of many different flavors are supported by K8s, with Docker being the most common. Containers control the resources of their hosts, making the sharing of resources possible, and K8s orchestrates the containers amongst the hosts to share those resources seamlessly. The advent of Nvidia’s GPU Cloud catalog of AI containers, DockerHub, and other sharing vehicles for workload containers has certainly accelerated container usage for quite a few different workloads. The ability to centralize images, manage resources, and increase the repeatability of infrastructure is the story of the rise of containers and the rise of K8s. As such, the in-cloud and on-premises capability of scheduling jobs with K8s is one way we are achieving workload agility.

A Clever Containment Strategy


Helping K8s to bring order to infrastructure is the job of the Container Storage Interface (CSI). CSI is a standard K8s plugin interface that helps bring authentication, transparency, and auditing to stateful storage as it’s used by stateless containers. The beauty of CSI and K8s combined is that they provide a mechanism to leverage existing infrastructure in an agile way. Dell Technologies recently released an Isilon CSI driver for K8s. See the link below.
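
As a minimal sketch of how a workload might request CSI-provisioned persistent storage (assuming the official kubernetes Python client is installed and a CSI driver has registered a StorageClass in the cluster; the name "isilon-csi" below is a placeholder for whatever your driver actually registers):

```python
# Request CSI-provisioned persistent storage with the Kubernetes Python client.
# The StorageClass name "isilon-csi" is a placeholder for the installed driver's class.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="analytics-scratch"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],
        storage_class_name="isilon-csi",          # placeholder StorageClass
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
print("PersistentVolumeClaim submitted; the CSI driver will bind a volume.")
```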

The Sky’s the Limit


K8s is becoming commonplace and Bright Computing makes deploying and managing K8s easier. Workloads can leverage any combination of available FPGA, GPU or CPU compute resources – whether on public or on-prem clouds; applications are made portable and secured within containers with K8s managing workload movement; and centralized and secured storage is available and accessed via application-native protocols, including SMB and NFS. Its single user interface makes deployment and overall management a snap.

All Together


Today the partnership of Dell Technologies and Bright Computing offers a seamless way to effectively schedule jobs, helping eliminate data center silos and leading to workload agility. We have a number of offerings in market today, ranging from reference architectures to Ready Solutions such as HPC Ready Solution and AI Ready Solution.  These solutions are also assembled regularly for customer workloads like Hardware in Loop (HIL) / Software in Loop (SIL) / Simulation / Re-Simulation for Automotive Autonomous Driving.


Sunday, 22 March 2020

Demystifying the Relationship Between On-Premises Storage and Public Cloud

Transitioning to cloud is often depicted as a transition toward simplicity. We are inundated with promises of extreme flexibility, unlimited scale, incremental agility, the ability to leverage a wide range of next-gen applications, and “pay by the drip” consumption. However, many organizations transitioning data and workloads to the cloud are coming to terms with the unpleasant reality of access and egress fees, data gravity, latency, vendor lock-in, and even compliance and control issues. In a recent survey conducted by ESG, 55 percent of storage decision-makers reported moving at least one workload back from public cloud to an on-premises data center. While it has become clear that cloud might not be as simple as it seems, it still does provide significant value – and simplicity – when utilized to support certain workloads.

Another factor to consider when developing a cloud strategy is the proliferation of DevOps and containers. Container-based workloads are growing in popularity, with 21 percent of organizations surveyed by ESG stating that one of their most significant investment areas is increasing infrastructure capacity to support application development, and 17 percent identifying increasing use of containers as a significant application development investment.

We’ve established that cloud is not a one-size-fits-all destination; it’s actually an operating model that requires a thoughtful approach to balancing workload requirements across on-premises and cloud environments. Rarely is a single public cloud provider going to satisfy all the requirements of current workloads, let alone the workloads of the future. Workload diversity is only going to expand, making hybrid cloud the new reality. Our customers tell us they are looking for a single cloud experience that includes a simplified view of resources and a consistent experience. Sounds great, but how do you make this happen with so much variation and complexity across data and workloads? And where does Dell EMC Storage fit in?

Dell Technologies Cloud Validated Designs enable you to build hybrid cloud environments using Dell EMC storage, compute and networking that has been validated with VMware Cloud Foundation (VCF). This is essentially a customized approach to building your unique Dell Technologies Cloud by selecting the infrastructure components that align with your requirements, providing a single-pane management view and enabling you to deliver cloud capabilities across a wide range of workloads.


This approach also allows you to meet the varied demands of these workloads – including greater performance, capacity and application flexibility – by scaling storage independently from compute. Additionally, you have the option to utilize unique enterprise data services such as inline data reduction capabilities, synchronous replication and more. Dell EMC storage options include PowerMax – ideal for mission critical workloads and massive consolidation to lower TCO, and Unity XT – simple, unified, flexible, all-inclusive midrange storage.

If you are looking for an alternative to building a comprehensive Dell Technologies Cloud solution in your data center, Dell EMC Cloud Storage Services provides the flexibility to directly connect Dell EMC storage consumed as a service to the public cloud provider of choice – including AWS, VMware Cloud on AWS, Microsoft Azure and Google Cloud Platform.


Cloud Storage Services uses a high-speed, low-latency connection while keeping data independent of the cloud. This allows you to leverage compute and services from multiple clouds simultaneously or switch between them based on application needs without having to move data, keeping you in control and eliminating cloud vendor lock-in. The service utilizes native array-based replication to move data to the cloud, with the ability to connect a single storage volume to multiple clouds and switch between them as needed. Cloud Storage Services also provides the option to use the cloud as a low-cost disaster recovery option where you don’t need to set up and manage a secondary site. Similar to Dell Technologies Cloud Validated Designs, this service is available for Dell EMC PowerMax and Unity XT.

Source: dellemc.com

Saturday, 21 March 2020

Paving the way for the Next Data Decade

It’s hard to believe that we’re heading into the year 2020 – a year that many have marked as a milestone in technology. Autonomous cars lining our streets, virtual assistants predicting our needs and taking our requests, connected and intelligent everything across every industry.

When I stop to think about what has been accomplished over the last decade – it’s quite remarkable.  While we don’t have fully autonomous cars zipping back and forth across major freeways with ease, automakers are getting closer to deploying autonomous fleets in the next few years. Many of the every-day devices, systems and applications we use are connected and intelligent – including healthcare applications, industrial machines and financial systems – forming what is now deemed as “the edge.”

At the root of all that innovation and advancement are massive amounts of data and compute power, and the capacity across edge, cloud and core data center infrastructure to put data through its paces. And with the amount of data coming our way in the next 10 years – we can only imagine what the world around us will look like in 2030, with apps and services we haven’t even thought of yet.


2020 marks the beginning of what we at Dell Technologies are calling the Next Data Decade, and we are no doubt entering this era with new – and rather high – expectations of what technology can make possible for how we live, work and play. So what new breakthroughs and technology trends will set the tone for what’s to come over the next 10 years? Here are my top predictions for the year ahead.

2020 proves it’s time to keep IT simple


We’ve got a lot of data on our hands…big data, metadata, structured and unstructured data – data living in clouds, in devices at the edge, in core data centers…it’s everywhere. But organizations are struggling to ensure the right data is moving to the right place at the right time. They lack data visibility – the ability for IT teams to quickly access and analyze the right data – because there are too many systems and services woven throughout their IT infrastructure. As we kick off 2020, CIOs will make data visibility a top IT imperative because, after all, data is what makes the flywheel of innovation spin.

We’ll see organizations accelerate their digital transformation by simplifying and automating their IT infrastructure and consolidating systems and services into holistic solutions that enable more control and clarity. Consistency in architectures, orchestration and service agreements will open new doors for data management – and that ultimately gives data the ability to be used as part of AI and Machine Learning to fuel IT automation. And all of that enables the better, faster business outcomes that the innovation of the next decade will thrive on.

Cloud co-existence sees rolling thunder


The idea that public and private clouds can and will co-exist becomes a clear reality in 2020. Multi-cloud IT strategies supported by hybrid cloud architectures will play a key role in ensuring organizations have better data management and visibility, while also ensuring that their data remains accessible and secure. In fact, IDC predicted that by 2021, over 90% of enterprises worldwide will rely on a mix of on-premises/dedicated private clouds, several public clouds, and legacy platforms to meet their infrastructure needs.

But private clouds won’t simply exist within the heart of the data center. As 5G and edge deployments continue to roll out, private hybrid clouds will exist at the edge to ensure the real-time visibility and management of data everywhere it lives. That means organizations will expect more of their cloud and service providers to ensure they can support their hybrid cloud demands across all environments. Further, we’ll see security and data protection become deeply integrated as part of hybrid cloud environments, notably where containers and Kubernetes continue to gain momentum for app development. Bolting security measures onto cloud infrastructure will be a non-starter…it’s got to be inherently built into the fiber of the overall data management strategy, from edge to core to cloud.

What you get is what you pay


One of the biggest hurdles for IT decision-makers driving transformation is resources. CapEx and OpEx can often be limiting factors when trying to plan and predict compute and consumption needs for the year ahead…never mind the next three to five years. SaaS and cloud consumption models have increased in adoption and popularity, providing organizations with the flexibility to pay for what they use, as they go.

In 2020, flexible consumption and as-a-service options will accelerate rapidly as organizations seize the opportunity to transform into software-defined and cloud-enabled IT. As a result – they’ll be able to choose the right economic model for their business to take advantage of end-to-end IT solutions that enable data mobility and visibility, and crunch even the most intensive AI and Machine Learning workloads when needed.

“The Edge” rapidly expands into the enterprise


The “Edge” continues to evolve – with many working hard to define exactly what it is and where it exists. Once limited to the Internet of Things (IoT), connectivity has spread to the point where it’s hard to find any systems, applications or services – or people and places – that aren’t connected. The edge is emerging in many places, and it’s going to expand, with enterprise organizations leading the way and delivering the IT infrastructure to support it.

5G connectivity is creating new use cases and possibilities for healthcare, financial services, education and industrial manufacturing. As a result, SD-WAN and software-defined networking solutions become a core thread of a holistic IT infrastructure solution – ensuring massive data workloads can travel at speed – securely – between edge, core and cloud environments. Open networking solutions will prevail over proprietary as organizations recognize the only way to successfully manage and secure data for the long haul requires the flexibility and agility that only open software defined networking can deliver.

Intelligent devices change the way you work and collaborate


PC innovation continues to push new boundaries every year – screens are more immersive and bigger than ever, yet the form factor becomes smaller and thinner. But more and more, it’s what is running at the heart of that PC that is more transformational than ever. Software applications that use AI and machine learning create systems that now know where and when to optimize power and compute based on your usage patterns. With biometrics, PCs know it’s you from the moment you gaze at the screen. And now, AI and machine learning applications are smart enough to give your system the ability to dial up the sound and color based on the content you’re watching or the game you’re playing.

Over the next year, these advancements in AI and machine learning will turn our PCs into even smarter and more collaborative companions. They’ll have the ability to optimize power and battery life for our most productive moments – and even become self-sufficient machines that can self-heal and self-advocate for repair – reducing the burden on the user and of course, reducing the number of IT incidents filed. That’s a huge increase in happiness and productivity for both the end users and the IT groups that support them.

Innovating with integrity, sourcing sustainably


Sustainable innovation will continue to take center stage, as organizations like ours want to ensure that the impact they have on the world doesn’t come at a dangerous cost to the planet. Greater investments in reuse and recycling for closed-loop innovation will accelerate – hardware will become smaller, more efficient and built with recycled and reclaimed materials, minimizing e-waste and maximizing already existing materials. At Dell Technologies, we met our Legacy of Good 2020 goals ahead of schedule – so we’ve retired them and set new goals for 2030: to recycle an equivalent product for every product a customer buys, lead the circular economy with more than half of all product content being made from recycled or renewable material, and use 100% recycled or renewable material in all packaging.

As we enter the Next Data Decade, I’m optimistic and excited about what the future holds. The steps our customers will take in the next year to get the most out of their data will set forth new breakthroughs in technology that everyone will experience in some way – whether it’s a more powerful device, faster medical treatment, more accessible education, less waste and cleaner air. And before we know it, we’ll be looking forward to what the following 10 years will have in store.

Source: dell.com

Thursday, 19 March 2020

Server Disaggregation: Sometimes the Sum of the Parts Is Greater Than the Whole


The notion of “the whole being greater than the sum of its parts” is true for many implementations of technology. Take, for example, hyper-converged infrastructure (HCI) solutions like the Dell EMC VxRail. HCI combines virtualization software and software defined storage with industry standard servers. It ties these components together with orchestration and infrastructure management software to deliver a combined solution that provides operational and deployment efficiencies that, for many classes of users, would not be possible if the components were delivered separately.

However, certain challenges require separating out the parts – that’s where the solution is found. And, that is true in the case of Server Disaggregation and the potential benefits such an architecture can provide.

So, what is Server Disaggregation? It’s the idea that for data centers of a certain size, server efficiency can be improved by disaggregating the traditional server’s components and grouping like components into resource pools. Once pooled, a physical server can be aggregated (i.e., built) by drawing resources on the fly, optimally sized for the application it will run. The benefits of this model are best described by examining a little history.

B.V.E. (Before the Virtualization Era)


Before virtualization became prevalent, enterprise applications were typically assigned to physical servers in a one-to-one mapping. To prevent unexpected interactions between the programs, such as one misbehaving program consuming all the bandwidth of a server component and starving the other programs, it was common to give critical enterprise applications their own dedicated server hardware.

Figure 1 describes this model. Figure 1 (a) illustrates a concept physical server with its resources separated by class type: CPU, SCM, GPU and FPGA, Network, Storage. Figure 1 (b) shows a hypothetical application deployed on the server and shows the portion of the resources the application consumed. Figure 1 (c) calls out the portion of the server’s resources that were underutilized by the application.

Figure 1 (c) highlights the problem with this model: overprovisioning. The underutilized resources were the result of overprovisioning the server hardware for the application to be run. Servers were overprovisioned for a variety of reasons, including lack of knowledge of the application’s resource needs, fear of possible dynamic changes in workload, and the need to account for anticipated application or dataset growth over time. Overprovisioning was the result of a “better safe than sorry” mindset, which was not necessarily a bad philosophy when dealing with mission-critical enterprise applications. However, this model had its costs (e.g., higher acquisition costs, greater power consumption, etc.). Also, because the sizing of multiple servers for applications was done when the servers were acquired, a certain amount of configuration agility was lost as more knowledge about the true resource needs of the applications was learned. Before virtualization, data center server utilization could be as low as 15% or less.


Figure 1: Enterprise Application Deployment before Virtualization

The Virtualization Age


When virtualization first started to appear in data centers, one of its biggest value propositions was to increase server utilization. (Although many people would say, and I would agree, that equally important are the operational features that virtualization environments like VMware vSphere provide – features like live migration, snapshots and rapid deployment of applications, to name a few.) Figure 2 shows how hypervisors increased server utilization by allowing multiple enterprise applications to share the same physical server hardware. After virtualization was introduced to the data center, server utilization could climb to 50% to 70%.


Figure 2: Enterprise Application Deployment after Virtualization

Disaggregation: A Server Evolution under Development


While the improvement of utilization brought by virtualization is impressive, the amount of unutilized or underutilized resources trapped on each server starts to add up quickly. In a virtual server farm, the data center could have the equivalent of one idle server for every one to three servers deployed.
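
A quick back-of-the-envelope calculation makes the point, using the utilization figures mentioned above:

```python
# Idle capacity still trapped after virtualization: at 50-70% average
# utilization, every two to three busy hosts carry roughly one host's
# worth of unused resources.
for utilization in (0.15, 0.50, 0.70):
    servers = 100
    idle_equivalent = servers * (1 - utilization)
    print(f"utilization {utilization:.0%}: "
          f"~{idle_equivalent:.0f} idle-server equivalents per {servers} servers "
          f"(one idle per {servers / idle_equivalent:.1f} deployed)")
```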

The goals of Server Disaggregation are to further improve the utilization of data center server resources and to add to operational efficiency and agility. Figure 3 illustrates the Server Disaggregation concept. In the fully disaggregated server model, resources typically found in servers are grouped together into common resource pools. The pools are connected by one or more high-speed, high-bandwidth, low latency fabrics. A software entity, called the Server Builder in this example, is responsible for managing the pooled resources and rack scale fabric.

When an administrator or a higher-level orchestration engine needs a server for a specific application, it sends a request to the Server Builder with the characteristics of the needed server (e.g., CPU, DRAM, persistent memory (SCM), network, and storage requirements). The Server Builder draws the necessary resources from the resource pools and configures the rack scale fabric to connect the resources together. The result is a disaggregated server as shown in Figure 3 (a), a full bare-metal, bootable server ready for the installation of an operating system, hypervisor and/or application.

The process can be repeated as long as the required unassigned resources remain in the pools, allowing new servers to be created and customized for the application to be installed. From the OS, hypervisor or application point of view, the disaggregated server is indistinguishable from a traditional server, although with several added benefits that will be described in the next section. In this sense, disaggregation is an evolution of server architecture, not a revolution, as it does not require a refactoring of the existing software ecosystem.
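
To make the concept concrete, here is a toy model of the Server Builder idea described above; it is not a real product API, and the pool sizes and resource names are purely illustrative:

```python
# Toy model of the Server Builder concept: compose servers from shared
# resource pools and return resources to the pools when servers retire.

class ServerBuilder:
    def __init__(self, pools):
        self.pools = dict(pools)          # e.g. {"cpu_cores": 512, "dram_gb": 4096, ...}
        self.servers = {}

    def compose(self, name, **request):
        """Reserve the requested resources if every pool can satisfy them."""
        if any(self.pools.get(r, 0) < qty for r, qty in request.items()):
            raise RuntimeError(f"insufficient pooled resources for {name}")
        for resource, qty in request.items():
            self.pools[resource] -= qty
        self.servers[name] = request
        return request

    def retire(self, name):
        """Return a retired server's resources to the pools for reuse."""
        for resource, qty in self.servers.pop(name).items():
            self.pools[resource] += qty

builder = ServerBuilder({"cpu_cores": 512, "dram_gb": 4096, "gpu": 16, "nvme_tb": 200})
builder.compose("db-server", cpu_cores=64, dram_gb=1024, nvme_tb=20)
builder.compose("ml-trainer", cpu_cores=32, dram_gb=512, gpu=8)
print("remaining pools:", builder.pools)
builder.retire("db-server")
print("after retiring db-server:", builder.pools)
```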


Figure 3: Disaggregated Servers

The Benefits of Being Apart


While having all the capabilities of a traditional server, the disaggregated server has many benefits:

◉ Configuration Optimization: The Server Builder can deliver a disaggregated server specifically composed of the resources a given application requires.

◉ Liberation of Unused Resources: Unused resources are no longer trapped within the traditional server chassis. These resources are now available to all disaggregated servers for capability expansion or to be used for the creation of additional servers (see Figure 3 (b)).

◉ Less Need to Overprovision: Because resources can be dynamically and independently added to a disaggregated server, there will be less temptation to use a larger than needed server during initial deployment. Also, since unused resources are available to all existing and future configurations, spare capacity can be managed from a data center level instead of a per server level, enabling a smaller amount of reserved resources to provide the overflow capacity to more servers.

◉ Independent Acquisition of Resources: Resources can be purchased independently and added separately to their respective pools.

◉ Increased RAS (Reliability, Availability and Serviceability): High-availability can be added to server resources where it was not possible or economical to do so before. For example, the rack scale fabric can be designed to add redundant paths to resources. Also, when a CPU resource fails, the other resources can be remapped to a new CPU resource and the disaggregated server rebooted.

◉ Increased Agility through Repurposing: When a disaggregated server is retired, its resources return to the pools, where they can be reused in new disaggregated servers. Also, as application loads change, disaggregated servers devoted to one application cluster can be re-formed and dedicated to another application cluster with different resource requirements.

The above list is not exhaustive and many other benefits of this architecture exist.

The Challenges (and Opportunities) of a Long(ish)-Distant Relationship


Full server disaggregation is not here yet and the concept is under development. For it to be possible, an extremely low-latency fabric is required to allow the components to be separated at the rack level. The fabric also needs to support memory semantics to be able to disaggregate SCM (Storage Class Memory). It remains to be seen whether all DRAM can be disaggregated from the CPU, but I believe that large portions can, depending on the requirements of the different classes of data used by an application. Fortunately, the industry is already developing an open standard for a fabric which is perfect for full disaggregation: Gen-Z.

The software that controls resources and configures disaggregated servers, the Server Builder, needs to be developed. It also provides opportunities for the addition of monitoring and metric collection that can be used to dynamically manage resources in ways that were not possible with the traditional server model.

Another opportunity is the tying together of the disaggregated server infrastructure with the existing orchestration ecosystems. Server Disaggregation is in no way a competitor to existing orchestration architectures like virtualization. On the contrary, Server Disaggregation is enhancing the traditional server architecture that these orchestration environments already use.

One can imagine that the management utilities administrators use to control their orchestration environments could be augmented to communicate directly to the Server Builder to create the servers they need.  The administrator may not ever need to interface directly to the Server Builder. The benefits of disaggregation should be additive to the benefits of the orchestration environments.