Thursday, 30 April 2020

3 Data Center Best Practices Every Mid-Market Organization Should Follow


In today’s digital world, businesses are built on data. That data has value not only to the organizations that house it, but also to external and internal threat actors. To ensure that your business has the digital services it needs, you need trusted infrastructure. Research by ESG and Dell shows that the return on investment, as well as the risk reduction, from running a trusted data center is significant. On the spectrum of Leader and Laggard IT organizations, 92 percent of leaders surveyed reported that investments in infrastructure technologies to maximize uptime and availability and minimize security risk have met or exceeded ROI forecasts.

Mid-market organizations must quickly respond to changing business needs in order to get ahead of the competition when everyone is ‘always-on.’ How do companies maintain trusted data centers and compete to become the enterprises of tomorrow while also managing IT budgets very closely? The answer is in efficient solutions that enable businesses to do more with less and securely extend the value of their investments. Brands must also have the confidence and peace of mind that vital business data is protected and recoverable no matter where it resides.

Why does leading in data center trust matter? The cost of being less secure is high. Surveyed firms estimate that the average hourly cost of downtime caused by security breaches is $30,000 to $38,000. Notably, 38 percent of line-of-business executives have serious concerns about IT’s security capabilities and controls. Additionally, security professionals are in high demand and hard to find.

ESG has identified three best practices common to trusted data center leaders, along with the ways Dell Technologies solutions and PowerEdge servers help organizations achieve and support those best practices in an ‘always-on’ landscape.

1. Prioritize market-leading BIOS/firmware security.


Data flows in and out of servers faster than ever before, and it is crucial for organizations to protect this data. That’s why organizations need to ensure BIOS and firmware are up to date. Organizations that prioritize BIOS/firmware security are 2x more likely to say that their security technology delivers higher than expected ROI.
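To make the recommendation concrete, here is a minimal sketch of how an administrator might query the installed BIOS version over the standard Redfish API exposed by a server’s management controller. The management address and credentials are placeholders, and the "System.Embedded.1" member ID is an assumption (it is typical for Dell iDRAC but may differ on other hardware):

```python
# Illustrative sketch only: read the installed BIOS version from a server's
# Redfish ComputerSystem resource. "System.Embedded.1" is the member ID typically
# used by Dell iDRAC; the address and credentials below are placeholders.
import requests

BMC_HOST = "https://192.0.2.10"                      # hypothetical iDRAC address
SYSTEM_URI = "/redfish/v1/Systems/System.Embedded.1"

def get_bios_version(host, user, password):
    """Return the BiosVersion property reported by the Redfish ComputerSystem resource."""
    # verify=False is common with self-signed BMC certificates; use proper CA trust in production.
    resp = requests.get(host + SYSTEM_URI, auth=(user, password), verify=False, timeout=30)
    resp.raise_for_status()
    return resp.json().get("BiosVersion", "unknown")

if __name__ == "__main__":
    print("Installed BIOS version:", get_bios_version(BMC_HOST, "root", "changeme"))
```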

And it’s not just about BIOS improvements: all of the other features and functionality help ensure that the technology continues to get better and more secure over time. Trusted data centers build in this expanded security functionality.

2. Refresh server infrastructure frequently.


ESG highlights the role hardware plays in the trusted data center and the benefits experienced by leaders who refresh their server infrastructure. For example, optimized infrastructure results in a 41 percent reduction in downtime costs in a modern server environment. Organizations with modern server environments (servers that are less than three years old) save as much as $14.3M/year in avoided downtime versus organizations with legacy servers.

That’s because old hardware can’t take on new threats. In the mid-market space, companies may not be aware of new threats that are emerging or may not think they’re big enough to be considered a target. The reality is they could be, and it is even more important to make sure that data center hardware is secure and up to date.

Unfortunately, IT hardware doesn’t get better with time; the older it gets, the less reliable it becomes. Monitoring and maintaining older servers costs more, in head count, parts and the resources needed to get those servers back up and running, than purchasing optimized hardware on a regular refresh cycle. It makes sense to refresh more quickly to make sure you’re getting all the latest technology. With more advanced systems, if you do experience issues, you have more failover capabilities.

3. Automate server management.


Highly automated organizations are 30 percent more likely to deliver highly reliable application and system uptime and reduce data loss events by 71 percent. Leaders are seeing tremendous value from automating their server management – they reported saving an average of 10.5 person-hours per week.
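As a hedged illustration of what this kind of automation can look like, the sketch below loops over a small, hypothetical fleet of management addresses, compares each server’s reported BIOS version against an approved baseline, and records its power state. The addresses, credentials and baseline version are placeholders, not real values:

```python
# Illustrative sketch only: audit a small fleet of servers over Redfish, flagging any
# whose BIOS version has drifted from an approved baseline and recording power state.
# The addresses, credentials and baseline version are hypothetical placeholders.
import requests

SYSTEM_URI = "/redfish/v1/Systems/System.Embedded.1"   # typical Dell iDRAC member ID
BASELINE_BIOS = "2.5.4"                                 # hypothetical approved version
FLEET = ["https://192.0.2.10", "https://192.0.2.11"]    # hypothetical iDRAC addresses

def audit_fleet(user, password):
    for bmc in FLEET:
        try:
            resp = requests.get(bmc + SYSTEM_URI, auth=(user, password),
                                verify=False, timeout=30)
            resp.raise_for_status()
            system = resp.json()
            version = system.get("BiosVersion", "unknown")
            power = system.get("PowerState", "unknown")
            status = "OK" if version == BASELINE_BIOS else "NEEDS UPDATE"
            print(f"{bmc}: BIOS {version} ({status}), power state {power}")
        except requests.RequestException as exc:
            print(f"{bmc}: unreachable ({exc})")

if __name__ == "__main__":
    audit_fleet("root", "changeme")
```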

How are Dell EMC PowerEdge Servers built to support trusted data centers?

With so much at stake, security is one of the primary values that Dell builds into every single product we deliver. Our PowerEdge servers are engineered with security in mind for optimized infrastructure that lays the foundation to implement best practices.

Security is an evolving landscape and so is server management; “secure today” does not guarantee secure tomorrow. Fortunately, PowerEdge servers provide security that is built-in, not bolted on, and all models leverage the same management capabilities. Automation is essential and Dell is continually expanding remediation and threat detection through our OpenManage Application, including new capabilities around power management for reducing overall power consumption.

Dell Technologies infrastructure enables organizations to easily manage IT environments to solve their biggest challenges.

Tuesday, 28 April 2020

Dell Technologies Bolsters PC Security for Today’s Remote Workers


Cybercriminals are opportunistic by nature, altering their attack methods to compromise endpoints and access critical data. This is never truer than during times of change such as now with the overnight shift to a global remote workforce. With cybercriminals ramping up activity, organizations need to protect their remote workers starting with the devices they use to get their jobs done.

One area attackers will target is the PC BIOS, the core firmware embedded deep inside the PC that controls critical operations like booting the system and ensuring a secure configuration. To protect against BIOS attacks, organizations need built-in security solutions to protect endpoints. In response, Dell Technologies is introducing Dell SafeBIOS Events & Indicators of Attack (IoA) to further protect our commercial PCs, which are already the most secure in the industry. SafeBIOS Events & IoA uses behavior-based threat detection, at the BIOS level, to detect advanced endpoint threats.

With remote work increasing security gaps and the high economic pressure for businesses large and small to perform, Dell Technologies is arming customers with security solutions and best practices to better secure their PCs so they can stay focused on serving their end customers.

Dell SafeBIOS Events & Indicators of Attack


As workforces transition to remote work nearly overnight, organizations need to ensure their workers’ PCs are secure, starting below the operating system in the BIOS. Securing the BIOS is particularly critical because a compromised BIOS can potentially provide an attacker with access to all data on the endpoint, including high-value targets like credentials. In a worst-case scenario, attackers can leverage a compromised BIOS to move within an organization’s network and attack the broader IT infrastructure.

Organizations need the ability to detect when a malicious actor is on the move, altering BIOS configurations on endpoints as part of a larger attack strategy. SafeBIOS now provides the unique ability to generate Indicators of Attack on BIOS configurations, including changes and events that can signal an exploit. When BIOS configuration changes that indicate a potential attack are detected, security and IT teams are quickly alerted in their management consoles, allowing for swift isolation and remediation. SafeBIOS Events & IoA provides IT teams with visibility into BIOS configuration changes and analyzes them for potential threats – even during an ongoing attack.
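As a generic illustration of the underlying idea (configuration-drift detection against a known-good baseline), and not a description of how SafeBIOS Events & IoA is actually implemented, a minimal sketch might look like this:

```python
# Generic illustration of configuration-drift detection: compare a current snapshot of
# BIOS settings against a known-good baseline and raise alerts for anything that changed.
# This shows the concept only; it is not how SafeBIOS Events & IoA is implemented.
def detect_bios_drift(baseline, current):
    """Return human-readable alerts for settings that differ from the baseline."""
    alerts = []
    for setting, expected in baseline.items():
        observed = current.get(setting)
        if observed != expected:
            alerts.append(f"{setting}: expected {expected!r}, found {observed!r}")
    return alerts

# Hypothetical baseline captured while the endpoint was known to be healthy.
baseline = {"SecureBoot": "Enabled", "BootMode": "Uefi", "TpmSecurity": "On"}
# Hypothetical snapshot collected later from the same endpoint.
current = {"SecureBoot": "Disabled", "BootMode": "Uefi", "TpmSecurity": "On"}

for alert in detect_bios_drift(baseline, current):
    print("Potential indicator of attack:", alert)
```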

Detection at this level allows organizations to respond to advanced threats quickly and successfully, interrupting the attack chain before it’s able to do more damage. The SafeBIOS Events & IoA utility is available globally today for download on Dell commercial PCs as part of the Dell Trusted Device solution.

Helping Organizations Securely Work from Home


As many organizations enable remote working, it is critical they have the security tools and knowledge to work safely and securely. That’s why Dell Technologies is offering existing customers flexible endpoint security solutions to help them:

◉ Better secure today’s new working model as quickly as possible with VMware Carbon Black, which has eliminated endpoint limits until June 20, 2020.

◉ Pressure test remote work deployments with Secureworks’ accelerated vulnerability assessments, and get faster deployment and flexible payment options for Secureworks’ managed detection and response and incident response solutions.

◉ Securely deploy work-from-home devices with Dell Technologies, which is offering temporary licenses for Dell Encryption until May 15, 2020.

Source: dellemc.com

Sunday, 26 April 2020

Modern, Mobile Missions Deploy Custom Data Protection Solutions


Diverse factors, such as a growing mobile workforce, emerging technologies (artificial intelligence, Internet of Things, etc.), increased field operations, and hybrid IT environments are all driving federal networks to the edge. And, as the definition of the edge expands, so does the attack surface.

Mobile agencies are encountering new data protection challenges as they seek to ensure the right user can access the right data from the right device. This is especially important for defense and intelligence agencies, which require a heightened level of security to protect the hardware and software of devices – from hard drives in desktops and workstations, to laptops and mobile devices.

How can the federal government facilitate data access across both classified and unclassified networks without compromising the user, data, or device?

There is no simple answer. It often takes a unique approach to maximize mobility while maintaining security. To meet these specific data protection requirements, our team at Dell Technologies OEM | Embedded & Edge Solutions collaborates with OEMs, federal agencies, and FSIs to design and engineer customized security solutions leveraging our Tier 1 infrastructure.

Data Protection Development


For example, our customer, Hypori, was looking for a partner to provide the infrastructure for their Virtual Mobile Infrastructure solution, which secures mobile devices, including laptops and personal cell phones, wherever they are. Our teams worked together to determine how to best engineer and optimize Dell Technologies hardware to accommodate Hypori’s software, to ensure they were able to offer a solution that met the security needs of Federal agencies, and to help them bring the solution to market.

As Hypori’s Chief Revenue Officer, Sebastian Shahvandi, put it, “Our software has plenty of requirements – the amount of memory, the speed of a processor, the amount of GPU, the storage necessary, etc. – so collaboration is key. Our engineers worked closely with the Dell Technologies engineers and developers to select the hardware products that would allow a seamless fit from a management server to our storage environment.”

The end result? The Hypori solution is a virtual smartphone that offers military-grade security, enabling federal employees to connect to multiple network classifications from a single device. The software ensures even the most highly confidential environments are secure, while allowing employees to access the networks through their own devices (BYOD). By decoupling work environments from personal environments, agencies can ensure privacy for both the private device and the various networks. Employees no longer need separate work and personal phones; instead, they can consolidate to one device.

From Theory to Practice


But even the best solutions don’t always see market success. Just as with any off-the-shelf IT solution, it’s important to approach a customized solution not only from a technical perspective, but also from a pricing perspective. My advice to OEMs and FSIs is to select partners that can do both, so you can create the best products at the best price for agencies.

Key Takeaway: Collaboration


The right customized data protection solution can help make mobile security a reality. For OEMs, agencies, and systems integrators considering a customized approach similar to our partnership with Hypori, a deep, collaborative relationship is key.

It is important that OEMs, SIs, and agencies can work together on both the technological design and the market factors. How will you distribute the product? Are agencies able to afford the solution? Who will manage deployment? How long will the base hardware model be manufactured? How will hardware lifecycles impact the security of the solution? These answers will be critical to the technology’s successful implementation.

Saturday, 25 April 2020

Dell Technologies and Ververica: Analyzing Continuous Data Streams Across Industries

Every single entity in the digital world, be it an end user or a sensor device, continuously produces activity updates. People leave traces of their activity when they shop online or use any other online application. Machines, in turn, report on their actions and statuses. Capturing such traces and reports in the form of data streams enables software applications to digitally reconstruct the history of these entities, allowing queries against the data and yielding actionable insights. The same is true for applications that represent these streams in different formats.

To satisfy the demanding requirements of such applications, we developed the Dell EMC Streaming Data Platform (SDP). The solution ingests, stores and processes data continuously, mapping the software abstractions more naturally to those types of applications. The centerpiece of SDP is the open source storage system Pravega.

While ingesting and storing are critical functions of any data pipeline, a streaming data solution needs a stream processor that can benefit from the features Pravega offers. Pravega exposes streams as a storage primitive, enabling applications to ingest and consume data in stream form. Pravega streams accommodate an unbounded amount of data while being both elastic and consistent.

Apache Flink, a unique framework in the space of data analytics, offers powerful functionality for processing both unbounded and bounded data sets. When used in conjunction with Pravega, Apache Flink can tail or historically process data using the same source, while providing end-to-end exactly-once semantics and dynamically adapting to resource demands. The combination of Pravega and Apache Flink in the Streaming Data Platform raises the bar for stream processing platforms. It brings an unprecedented level of features that have long been needed to fulfill the requirements of existing and future applications.
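The self-contained sketch below illustrates the pattern in plain Python: a single append-only stream is replayed historically and then tailed for new events from a checkpointed position. It deliberately avoids the real Pravega and Flink APIs and only mimics the consumption model they provide:

```python
# Conceptual illustration only: one append-only stream consumed both historically and
# as a live tail from a checkpointed position. This mimics, in plain Python, the
# pattern Pravega and Flink enable; it does not use the real Pravega or Flink APIs.
class Stream:
    def __init__(self):
        self.events = []                  # append-only log of events

    def append(self, event):
        self.events.append(event)

    def read_from(self, offset):
        """Yield every event at or after `offset`; new appends extend the same read."""
        while offset < len(self.events):
            yield offset, self.events[offset]
            offset += 1

stream = Stream()
for reading in (21.0, 21.4, 22.1):        # historical data already in the stream
    stream.append(reading)

checkpoint = 0                            # a durable checkpoint would make this exactly-once
for checkpoint, event in stream.read_from(checkpoint):
    print("historical replay:", event)

stream.append(25.3)                       # a "live" event arrives after the replay
for checkpoint, event in stream.read_from(checkpoint + 1):
    print("tail read:", event)
```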



Figure 1: Data pipelines with Pravega and Flink in SDP

The Future


Dell Technologies envisions a world in which data streams are ubiquitous and stream processing is the new norm for modern application development. Stream applications are already prevalent, but there’s a clear increase in demand and scale for systems that process stream data. We are excited about this future and the opportunity to build the systems that can help our customers’ applications process stream data efficiently and effectively.

The combination of Pravega and Flink in the Streaming Data Platform to compose data pipelines already provides unique possibilities. The partnership between Dell Technologies and Ververica will further enable us to build features for data pipelines that will help organizations simplify storage needs, create a foundation of unified data and innovate using an endless array of applications. We look forward to working together to deliver streaming data technology that continues to provide meaningful outcomes for our customers.

Source: dellemc.com

Thursday, 23 April 2020

Wyse ThinOS 9: Delivering a Better Virtual Computing Experience


As organizations continue to focus on creating a seamless solution to manage their endpoint devices, they are looking for solutions that deploy remote desktops and other applications across rapidly changing cloud-based environments.

In the 20 years since the rollout of the original ThinOS, we’ve continued to innovate and incorporate solutions that will prepare customers for a new generation of advanced workplace transformation. That is why I am proud to announce the newest iteration of Dell Technologies’ most secure operating system on the market – ThinOS 9.

As the industry-leading firmware that powers Dell Technologies’ portfolio of thin client devices around the globe, ThinOS 9 will combine unmatched advantages in endpoint security with easy deployment, an optimized image size and a seamless central management suite. Customers will no longer have to choose between best-in-class security management and the most current core client software. With the investment in a new OS, we also continue our commitment to innovation and supporting customers’ rapid adoption of partner technologies, with a new management suite to support a seamless end-to-end solution for Dell engineered hardware.


With this update, ThinOS 9 will focus on Citrix Workspace, including advanced features such as browser content redirection and Enlightened Data Transport. These advancements will bring streamlined efficiency to the workforce on our Wyse 3040 and 5070 thin client devices, as well as our recently released Wyse 5470 mobile thin client and 24” All-in-One products.

Dell Technologies remains committed to providing the ultimate in security, intelligent management and optimized user experience without limiting your virtualization platform choices.

Tuesday, 21 April 2020

Building a Better Cloud Starts with Getting the Best People


As we close in on the end of our fiscal year, it’s amazing to see how much success we’ve had on the Dell Technologies Cloud team in our inaugural year. This isn’t a blog post to pat ourselves on the back or take the victory lap; it’s an acknowledgement that we still have a long way to go to achieve our goals, and that means growing the team.

In this new decade, what organizations require from their cloud solutions partners has substantially changed. No one is clamoring for yet another public cloud vendor to further splinter things. No, what organizations need is a partner who can drive consistency across these disparate environments. And who better than Dell Technologies, a leader not only in infrastructure solutions, but also in virtualization and cloud software, with a strong commitment to building the solutions that organizations have come to rely on so heavily?

With that in mind, we launched Dell Technologies Cloud, the first set of solutions that show the path forward for Dell Technologies: Embracing Cloud Operating Models everywhere. This direction permeates every aspect of our business from engineering, to sales, to marketing. And because of this we are actively expanding our team, looking for the best and the brightest to join us on this rocketship.

Here is why you should believe in us, and why this is a great organization to come into:

1. Market Acceptance – You might be saying, “why should I select Dell Technologies versus one of the hyperscalers?” The truth is we are one of the largest and most trusted infrastructure and software companies in the world, and our launch of Dell Technologies Cloud has been praised by analysts and customers alike.

2. Ability to Make an Impact – Since the solution is relatively new to market and we’re rapidly transforming to meet new markets and new customers’ needs across the board, we have a startup-like culture, backed with the investments only the largest companies can make.

3. Smart and Pragmatic Leadership – We enter spaces with an obvious market need and where we hold a competitive advantage, versus getting into protracted and messy battles where either side’s success looks more like mutually assured destruction. Because of this, even during challenging economic conditions we’ve executed well and taken share.

4. Strong Culture – I’ve spent time in many different organizations but Dell’s commitment to work/life balance and ability to foster a collaborative “one team” culture is truly unique. I can’t stress enough how amazing it is to be able to ask for help and see folks willing to support as opposed to having them say “that’s not in my KPIs.”

5. Career Trajectory – From what I’ve observed in my almost three years here, and looking at this team specifically, we’re in a high growth area that can lead to great career growth. From my own experience I would say it’s been the most I’ve ever been rewarded for the work I’ve done.

Sunday, 19 April 2020

Being Deliberate with PowerEdge and AMD’s New High-Frequency CPUs

When we designed this generation of Dell EMC PowerEdge with 2nd Gen AMD EPYC, we made sure that our servers would be able to use the full performance of the EPYC processors. We wanted our customers to feel comfortable handling any workload. This involved creating a server that can handle high throughput, faster I/O, and versatile memory configurations. Our customers deserve the flexibility to decide their data center needs, including workloads that require high-frequency processors.

We are excited to offer all three of AMD’s new high-frequency (HF) CPUs in each of the PowerEdge with 2nd Gen AMD EPYC platforms (R6515, R7515, R6525, R7525, and C6525). Putting the new AMD HF CPUs into the PowerEdge platforms accelerates relational database, HPC, and virtualization workloads, while keeping your critical business systems up and running. PowerEdge servers with the new AMD HF chips are designed to handle workloads that require high throughput on a single thread.

These chips boost the performance of our servers, particularly for database, hyperconverged infrastructure, and HPC workloads. The R6525 holds a world-record two-socket, four-node benchmark result on VMmark® 3.1 with VMware vSAN®. This world record equates to a 47.4 percent higher VMmark 3.1 vSAN score.


Even though these are specialized chips, we designed our servers to handle the full capabilities that AMD brings to the table. The chips require a large L3 cache, high I/O and ample memory to run at maximum effectiveness. That is why PowerEdge’s flexible configurations, with options for direct-connect NVMe and PCIe Gen 4.0, are a great match for handling any new virtualization and HCI requirements. With higher clock speeds and fewer cores, think high-frequency trading, custom single-threaded applications, and database workloads. Database applications especially benefit from the proper balance of cores and performance, lowering licensing costs tied to core counts.

Dell is also running a special promotion. When customers select PowerEdge servers with 2nd Gen AMD EPYC processors, the second half of their Windows Server licenses is free: Windows Server 2019 OEM licensing costs are waived beyond the first 32 cores for 1-socket servers and the first 64 cores for 2-socket servers. If you’re interested in learning more, please contact your sales representative or visit this link to explore Dell EMC’s data center solutions for Microsoft.

There is always the option to optimize your workloads and servers to run a more efficient data center, and these high-frequency, lower-core-count systems provide a unique configuration to help balance your data center costs. We are hearing many stories about customers implementing the first wave of PowerEdge with AMD in their data centers. These new servers will help you run single-threaded workloads effectively and keep pace with new technology developments.

The addition of these new AMD HF chips into the PowerEdge portfolio will help you stay lean and efficient and enable changes in your digital operating model.

Saturday, 18 April 2020

Modernizing Your IT Infrastructure with Dell EMC Data Protection and VMware Automation

Aligning data protection with business needs


A clear, unambiguous and widely-communicated data protection strategy is critical to the successful IT transformation of an organization, and the importance of aligning IT more closely with your current business needs cannot be emphasized enough. In fact, companies spend countless hours, and most of their tech budget, on keeping systems running. What happens when unplanned events make it nearly impossible to continue “business as usual”? When it comes to business continuity, perhaps the biggest risk most companies face is not having a disaster recovery plan or data protection strategy at all.

The creation of a data protection strategy may seem overwhelming at first, but it shouldn’t be. It is not intended to be a precise, detailed set of instructions for every technology associated with your data. Rather, it should be a clear, concise, high-level analysis of which technologies or applications your organization should prioritize, with guidance to lower the risks associated with adopting them.

On top of that, there is an ever-increasing array of data protection strategy standards your business is expected to adhere to. According to ESG’s Data Protection Predictions for 2020, compliance mandates like GDPR and CCPA do a good job of reminding businesses that they had better be on top of their data. That said, it shouldn’t take the most recent breach at a business to remind us to protect our clients’, partners’, employees’, company’s and our own most valuable asset: data.

So where is all your data stored?


According to the 2020 Global Data Protection Index, most hybrid cloud approaches for application deployments will be based on VMware infrastructure. And when it comes to protecting complex environments, specifically large-scale VMware environments, there are some common pain points for those who have this responsibility. ESG recently took a closer look at this topic on behalf of Dell Technologies, with a focus on data protection.

ESG found that deployments of virtualization technology are overwhelmingly hybrid and they are equally split between on-premises and public cloud, on average. Overall, while IT organizations are very comfortable with virtualization technology, they struggle with skill sets and, operationally, with data protection, despite its importance.


So why are organizations turning to a hybrid cloud environment, within a VMware environment?

◉ Fit-for-Purpose: Store your most sensitive data on dedicated hardware while simultaneously benefiting from the cost efficiency of the cloud, particularly with superior Dell EMC deduplication and on-demand scalability.

◉ Improve Security: Place your sensitive customer data on a dedicated server while running your front-end applications in the public cloud – creating a seamless, agile and secure environment.

◉ Cost Benefits: Reduce your TCO, match your cost patterns to your revenue/demand patterns and transition from a capital-intensive cost model to an OpEx-based model.

◉ Drive Innovation: Reduce business risk and take advantage of emerging cloud technologies, while still retaining core applications within your data centers.

◉ Seamless Integration with Data Protection: Add Dell EMC Data Protection solutions seamlessly to your hybrid cloud for additional confidence and peace of mind that your data and investments are always protected.

There are some key VM data protection challenges that were also identified, with recoverability, performance and scalability being key areas of concern. VM recovery failures lead to negative operational efficiency and business impacts and can trigger a domino effect of compliance violations and customer exposures.


VMware automation demands success


Beyond these challenges, data protection is the number one functional challenge (putting IT skills aside) to be resolved when it comes to VM environments. The complexity of configuring and operating data protection solutions can be challenging. Fortunately, not all solutions are created equal. Organizations can eliminate the traditional cost and complexity of data protection and disaster recovery with Dell EMC data protection and VMware automation. As more users adopt virtualization, the deep integration of Dell EMC Data Protection with the VMware user interface becomes more and more important. The Dell EMC portfolio of data protection products provides the best user experience, with automation and orchestration across the entire VMware stack.


A large-scale VMware deployment story


One Dell Technologies global customer is building a three-to-five-year strategy that involves eliminating their data centers and moving 50 percent of infrastructure to colocations and 50 percent to the cloud with 50,000 VMs. Dell EMC Data Protection is the main partner for all colocations. The virtual machine (VM) automation capabilities are key to moving from the 40/40/20 (colo/cloud/premises) strategy they currently have to the 50/50/0 they want – all protected. They shared:

“We made a significant commitment with an investment in Dell Technologies over the next four years. Leadership has agreed that as they move toward their automation strategy, Dell Technologies will be exclusive to all data centers moving forward. This investment in PowerProtect DD, Data Protection Suite Enterprise and ECS aligns with their automation strategy. The consolidation efforts require that each component of the data center support the new strategy. The Dell EMC Data Protection team and the solutions that they provided gave us a consistent, reliable, repeatable experience.”

Modernizing your IT Infrastructure


Looking toward the future, as IT professionals look to modernize their backup solution, it should come as no surprise that organizations value efficiencies, ease of use and a simplified environment. Protecting large-scale VM deployments can be complex and there is still opportunity for improvement. The good news is that Dell EMC Data Protection offers some great solutions today to alleviate these challenges. What’s more, Dell EMC Data Protection and VMware are committed to delivering jointly engineered products, making it easier for our customers to maximize and protect their IT investment now and into the future. Together, we allow customers to harness the power of the cloud in an integrated manner, and no other vendor provides as much depth and breadth of enterprise data protection as Dell Technologies.


Thursday, 16 April 2020

To the Edge and Beyond: Resistance is Futile

Just like in Gene Roddenberry’s “Star Trek,” where the Borg subsumed all other races with the tag line “resistance is futile,” programmable fabrics are fast becoming the inevitable way of the future. Whether we consume them underneath a more traditional full-stack router or more directly using modern programmable fabric controllers, they are set to become part of the networking fabric.

Programmable fabrics have been predicted to revolutionize the network space for quite some time; however, we’re now seeing several key drivers that point to this technology becoming a reality sooner rather than later.

There are four major use cases:

1. Next Generation Network Fabric: providing a traffic engineered, multi-tenanted and scalable network fabric that doesn’t suffer from the problems with today’s mixed overlay/underlay fabrics.

2. 5G and Edge: converging and offloading fixed and wireless services at the edge and in the process reducing latency, jitter, power and space and freeing up server cores for new edge applications.

3. Telemetry & Autonomous networks: using in-band telemetry to drive more consistent and intelligent forwarding decisions in the fabric.

4. Automation: enabling CI/CD pipelines to roll network changes into production more quickly, reliably and efficiently.

These drivers require a different architectural approach for the network fabric than has been required in the past. The traditional router/switch architecture, running a full stack of routing protocols on each network device, will instead be moved more centrally to run atop a fabric controller for that domain. These domains will still interact with other existing domains using traditional routing protocols, while enabling the above use cases within their programmable fabric domain, as shown in the following diagram (note: the NG-SDN Domain is the programmable fabric):

(Diagram: traditional routing domains interoperating with an NG-SDN programmable fabric domain via a fabric controller)

While these programmable fabric domains may initially be introduced into smaller areas of the telecommunication provider’s network like the access or edge networks, over time they’ll likely be joined, and we may see a hierarchy of controllers used to control the end to end network similar to Google’s global SDN networks.

What is a “Programmable Fabric”?

A “Programmable Fabric” is the next generation of the Software Defined Network (i.e. ONF’s NG-SDN) in that it allows programmability all the way down to the forwarding pipeline. This programmability can then be used by network functions, allowing them to be fully disaggregated, virtualized and truly defined in software, deployed where and as needed in the network. Programmable fabrics are the inevitable next step in the evolution of the network.
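To give a feel for what programmability down to the forwarding pipeline means in practice, here is a deliberately simplified sketch of a controller pushing per-tenant match-action entries to switches along a traffic-engineered path. The Switch class and rule format are hypothetical illustrations, not a real fabric controller API:

```python
# Illustrative only: the flavor of a programmable-fabric controller that compiles a
# tenant's intent into match-action entries and pushes them to switch forwarding
# pipelines. The Switch class and rule format are hypothetical, not a real controller API.
class Switch:
    def __init__(self, name):
        self.name = name
        self.table = []                        # simplified match-action table

    def install_rule(self, match, action):
        self.table.append({"match": match, "action": action})
        print(f"{self.name}: match {match} -> {action}")

def program_tenant_path(switches, tenant_vlan, path_ports):
    """Install per-tenant forwarding entries along a traffic-engineered path."""
    for switch, out_port in zip(switches, path_ports):
        switch.install_rule({"vlan": tenant_vlan}, {"forward_port": out_port})

leaf, spine = Switch("leaf-1"), Switch("spine-1")
program_tenant_path([leaf, spine], tenant_vlan=100, path_ports=[7, 3])
```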

Dell Technologies is helping our Telecommunication Service Providers in the following ways:

◉ More cost efficient: Use of merchant silicon with strategic vendor support will help manage costs while scaling the network.

◉ More Open: An open and standards-based solution will allow the service provider to take advantage of industry innovations more easily while ensuring no vendor lock-in and will be key to driving down costs in the network.

◉ Multi-tenanted: The ability to define multiple different virtual networks in software on the same physical network for the purposes of multiple tenants or applications while keeping all the other characteristics required by a service provider (i.e. traffic engineering and telemetry).

◉ Traffic engineered: The ability to allow tenants or applications to signal priority and resourcing to the network and have the network provide that service (i.e. bandwidth, latency, jitter, specified path, redundancy characteristics).

◉ Telemetry: In-band telemetry that allows real time recording of packet metrics (i.e. latency, jitter, drops) on a per session basis is key to the operational support of the solution.

◉ Intelligent Control Loop Feedback: Machine learning can be applied to the telemetry being received from the network to provide intelligent control loop feedback to prevent or repair network failures and ensure the network is being operated correctly and efficiently.

◉ Continuous Integration and Continuous Deployment (CI/CD): Will be key to allowing service providers to integrate changes more easily and quickly into the network, allowing full operational readiness tests to be carried out before changes are deployed into a production network.

◉ Network Function offload: The ability to offload the user plane portion of network functions into the programmable fabric (BNG for fixed or UPF for wireless subscriber termination are examples of network functions that could be offloaded into the programmable fabric).

Source: dellemc.com

Wednesday, 15 April 2020

Data Science: Start Working Toward a New Career Today!

As data science becomes more and more essential for humankind, the increasing demand for proficient data scientists across industries is natural.

Data science is one of the hottest career paths right now, no doubt about it. As the world uses more and more data, and at an ever faster rate, the importance of data science will only grow over time. Data science is a complex field involving a wealth of concepts, tools, techniques, approaches, methodologies, and much, much more.

Data Scientist Career Path

While you may have the skills required to become a data scientist straight out of college, it is not surprising for people to need some on-the-job training before they are off and running in their careers.

This training is often centered on the company’s particular programs and internal systems, but it may include advanced analytics methods that are not taught in college.

The world of data science is an ever-changing field, so people working in it need to update their skills continually. They are continuously training to stay at the leading edge of information and technology.

Data Scientist Jobs

Data scientists work in many diverse settings, but the majority of them will work in office-like settings that empower people to work together in teams, collaborate on projects, and communicate effectively. Much of the work may involve loading numbers and data into systems or writing code that analyzes information for the business.

The movement, character, and all-around pace of the work environment will mostly depend on the business and the industry you work in. You could work in a fast-paced environment that favors quick results, or you could work for an organization that values slow, methodical, complete progress.

You may find a work environment designed to promote creative thinking, or you could work in an office that is built for performance and efficiency; it depends on the type of data science you are doing and the nature of the business you work for.

Job Outlook

Anyone working in the field of data science can expect the one-two punch of job security: not only will they earn an income well above the national average, but they can also expect their field to continue to grow over the coming decade.

The demand for data scientists is well above the national average and 50% higher than that for software engineers (17%) and data analysts (21%). The number of data scientists has increased over the last four years, and some estimates put the growth at 300%.

As more and more businesses rely on trustworthy information for their decisions, the need for people who can not only compile the data, but also organize it, store it, interpret it, and discover trends, will be all the more critical. Data collection by businesses will continue to grow, and data analysts should expect to be in high demand for years to come.

Data Scientist Salary

No matter what source you look at, one thing is for sure: these professionals stand to earn a significant income. The best source for career salaries is the U.S. Bureau of Labor Statistics (BLS), but unfortunately, it does not compile information for data scientists individually.

They do, however, have information on the Advanced Analytics Specialist role for data scientists, which covers what they call data mining, a skill that mirrors data science in many ways. According to the Salary Survey, people working as Advanced Analytics Specialists earn an average income of $70,360 per year. These numbers seem to correlate with earnings figures from other sources as well.

Whatever source you look at, you can see these skills are in high demand. If you have the skills, training, and know-how needed to become a data scientist, you will likely earn a substantial income for the length of your career. There is more good news as well, as these professionals will be in high demand for the foreseeable future.

Pros and Cons

There are many advantages to becoming a data scientist, and they do not all center on pay. The job is a diverse yet challenging career with a broad range of daily tasks, and this variety is often cited as one of the main benefits.

As a data scientist, you may work for a wide range of companies, coming up with solutions and insights related to customer retention, marketing, new products, or global business solutions. This means you get to engage in unique and exciting topics and subjects that give you a comprehensive perspective on the economy and the world at large.

Just like in any career, there are some definite drawbacks. While the extreme variety of subjects presents new challenges, it can also mean that you never get to dive fully into a specific topic. The technologies that you use will always be evolving, so you may find that the systems and software you just mastered are suddenly obsolete.

Before you know it, you need to learn a whole new system. This can also lead to plenty of confusion, as deciding which methods are best for particular jobs is very tough.

Related Careers

Many careers are either branches of data science or extensions of the profession. One of the roles you may move into during your career is senior data scientist. These data scientists use their training and extensive experience to create new and innovative approaches, lead data science teams, and develop new prototypes and algorithms for analyzing information and communicating conclusions.

Similar careers include data engineer, data architect, data analyst, software developer, computer network architect, database administrator, and information security analyst. Any profession that uses computer technology, information analysis, or forecasting could be a potential home for anyone trained or experienced in big data.

Tuesday, 14 April 2020

The Case for Moving Analytics to the Edge


There are compelling reasons for analyzing data at the network edge, where it is generated and captured, rather than sending all data to distant centers for analysis.

The rise of connected devices, Edge computing and the Internet of Things (IoT) has created an ever-growing flood of data that streams continuously into corporate and cloud data centers. To gain value from all of this data captured by sensors and IoT devices, organizations must put analytics tools to work.

Here is where a big challenge becomes even bigger. The process of transmitting data to distant data centers for analysis comes with challenges, from network latency and bandwidth limitations to security and compliance risks. Add to that the growing volume of data that requires immediate analysis at the point of capture — for example, to enable an autonomous vehicle to respond instantly to a pedestrian stepping into the roadway. In use cases like these, an immediate response is imperative.

Challenges like these build the case for analyzing data at the network edge, where the IoT devices are located and where intelligent systems can take actions based on the results of the analysis. In simple terms, we are talking about bringing the analytics to the data, rather than sending the data to the analytics. This is a point underscored in a Forrester study that examines how IoT deployments are driving analytics to the Edge.

“Many organizations have embraced internet-of-things (IoT) solutions to optimize operational processes, differentiate products and services, and enhance digital capabilities,” the Forrester study notes. “However, as companies deploy a diverse array of IoT projects and use cases — each with specific requirements — many encounter challenges with using centralized cloud-based and data center analytic strategies. To transform large streams of IoT data into insights in a fast, secure, and cost-effective way, organizations must revisit their IoT architecture, skills, and strategies.”

One of the keys to this shift is what Forrester refers to as “edge IoT for analytics” — a technique that takes analytic computations for certain Edge and IoT use cases out of the cloud or the enterprise data center and “moves it as close to the data sources as is necessary and feasible to enable real-time decisions, reduce costs, and mitigate security and compliance risks.”
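A minimal sketch of what bringing the analytics to the data can look like on an edge gateway is shown below: raw sensor samples are summarized locally, and only aggregates and out-of-range readings are forwarded upstream. The threshold and readings are synthetic examples:

```python
# Illustrative only: bringing the analytics to the data. An edge node summarizes raw
# sensor readings locally and forwards only aggregates and anomalies upstream,
# instead of shipping every sample to a distant data center. All values are synthetic.
from statistics import mean

THRESHOLD = 80.0                         # hypothetical alert threshold (e.g., degrees C)

def process_at_edge(readings):
    """Return the small payload an edge gateway would send upstream."""
    anomalies = [r for r in readings if r > THRESHOLD]
    return {
        "count": len(readings),
        "average": round(mean(readings), 2),
        "anomalies": anomalies,          # only out-of-range samples leave the edge
    }

raw_window = [71.2, 70.8, 93.5, 72.0, 69.9]   # one window of local sensor samples
print(process_at_edge(raw_window))
```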

A few definitions from Dell Technologies


◉ The Edge — exists wherever the digital world and physical world intersect and data is securely collected, generated and processed to create new value.

◉ The Internet of Things (IoT) — perhaps the purest possible expression of how the world is transforming from mechanical to digital. By digitally sensing the physical world, IoT transforms material things into streams of data, allowing people to sense and interact with them in entirely new ways to accelerate the pace of innovation.

◉ Edge computing — augments IoT by enhancing our ability to analyze IoT data and act on it in real time.

A broad range of use cases


The use cases for Edge and IoT span the gamut of operations for enterprises and public entities. McKinsey & Company illustrated this point in a report that identified 107 specific Edge use cases that enable “real-time decision making untethered from cloud computing’s latency.” These applications of Edge and IoT are not merely conceptual: the firm identified 3,000 companies deploying these use cases today.

“By circumventing the need to access the cloud to make decisions, Edge computing provides real-time local data analysis to devices, which can include everything from remote mining equipment and autonomous vehicles to digital billboards, wearable health appliances, and more,” McKinsey says.

Let’s look at a few examples of the way Edge and IoT are enabling better, faster decision-making and user services across a diverse range of use cases.

Digital cities

Around the world, cities are experiencing explosive growth. By 2050, nearly 70 percent of the world’s population will be concentrated in urban centers, up from 55 percent in 2018, according to forecasts from the United Nations. To continue to flourish, cities will need to adopt smarter approaches to managing population growth, safety, traffic, pollution, commerce, culture and economic growth.

Edge and IoT analytics can play a key role in better management of cities by making them more intelligent and self-reliant and enabling smarter citizen services. For example, with analytics at the Edge, cities can provide residents and businesses with real-time traffic information and navigation guidance that take into consideration weather, construction work and accident information fed from other Edge systems.

Here’s another example. Edge analytics can enable smart traffic management systems that automatically adjust the timing of traffic lights to help keep traffic flowing smoothly and reduce the pollution that comes with traffic jams. Even better, with analytics conducted at the Edge, cities can help ensure that services like these stay up and running even when network connections to cloud data centers go down.

Retail

In retail environments, Edge and IoT solutions enable merchants to combine data from many sources to enable seamless, insight-driven customer experiences. With these solutions, retailers can deliver outstanding shopping experiences that are personalized from the moment customers walk through the door. The large volumes of data and the immediacy involved in this process make this an inherently Edge and IoT solution.

Retailers are also making use of Edge and IoT solutions to analyze the retail environment to identify strengths and weaknesses, predict demand for bestselling products, and optimize inventory levels to help ensure that the right products are in the right place at the right time.

Edge and IoT solutions can also play an important role in surveillance of retail environments. In this use case, cameras capture video of shoppers and stream the footage to Edge gateways for analysis. With the right artificial intelligence tools in place, the Edge systems can automatically detect behaviors that are often associated with shoplifting, and then issue alerts to security personnel on the retail floor.

Healthcare and life sciences

With the proliferation of connected wearable medical devices, sensors, mobile apps and other new technologies, the movement of data in healthcare environments is changing in fundamental ways. At the same time, healthcare providers are using this data captured from the Edge and IoT to make greater use of AI and machine learning in diagnostic and patient care processes.

In this new world of healthcare, Edge and IoT solutions are essential. They help providers monitor the wellbeing of patients and devices remotely; keep track of patients, staff, equipment and inventory; and gather data for analysis. Collectively, these solutions help healthcare providers deliver smarter care, respond faster to health emergencies, and automate data collection for administrative mandates, such as those for compliance and reporting.

The growing importance of Edge and IoT solutions in healthcare is underscored by industry research that projects that the market for Edge and IoT in healthcare will exceed $500 billion by 2025 after expanding at an annual growth rate of nearly 20 percent over the forecast period.

Smart Manufacturing

Edge and IoT solutions are helping manufacturers improve product quality and factory yields by monitoring and controlling production lines in real time. With immediate feedback from smart systems, plant operators can spot emerging problems before they become actual problems, while continually optimizing processes.

Smart manufacturers, for example, are using AI and Edge-computing systems in conjunction with IoT data gathered by sensors to predict and avoid machine failure. The goal is to use predictive maintenance to prevent issues, resolve problems quickly and minimize operational downtime.
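As a simplified illustration of this predictive maintenance pattern, the sketch below flags sensor readings that drift far from the recent rolling average. Production systems would typically use trained models rather than a fixed statistical rule, and the data here is synthetic:

```python
# Illustrative only: a simple statistical check of the kind an edge system might run
# for predictive maintenance, flagging vibration readings that drift far from the
# recent norm. Real deployments would use trained models; the data here is synthetic.
from statistics import mean, stdev

def flag_anomalies(samples, window=5, z_limit=3.0):
    """Flag samples sitting more than z_limit standard deviations from the rolling window."""
    alerts = []
    for i in range(window, len(samples)):
        recent = samples[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma and abs(samples[i] - mu) / sigma > z_limit:
            alerts.append((i, samples[i]))
    return alerts

vibration = [0.51, 0.49, 0.52, 0.50, 0.48, 0.51, 1.90, 0.50]   # synthetic sensor trace
print(flag_anomalies(vibration))   # flags the 1.90 spike as a maintenance signal
```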

Here’s another example. Smart manufacturers are now leveraging the combination of sensor data, machine learning and advanced image recognition capabilities to automate the visual inspection and fault detection of products on the manufacturing lines. These intelligent systems trigger the automatic rejection of defective products from the production line.

Key takeaways


Today, there’s a tremendous amount of momentum for IoT and Edge computing, a point underscored by industry research. The research firm Strategy Analytics, for example, predicts that by 2025, nearly 60 percent of data will be processed in some form by Edge computing. As findings like these suggest, Edge computing and IoT-based solutions are the future — as well as the present.

So, does the rise of Edge analytics mean the cloud will go away? While some industry observers are making that case, at Dell Technologies, we disagree with this view. We believe that enterprises will continue to use public and hybrid clouds for storing large amounts of data, which can be used for data analytics and the training of machine learning models, while making use of Edge and IoT solutions, even Edge clouds, for applications that require more immediate responses from automated systems.

Saturday, 11 April 2020

Are you an OEM Designing Architecture for Latency-Sensitive Workloads?


Don’t Drink from the Fire Hose!


I love the expression “drinking from the fire hose.” It paints such a vivid picture of an overwhelmed person totally in over their heads, inundated with information. You hear the term a lot in the business world, usually when somebody starts a new role or project and is under pressure to get up to speed as quickly as possible.

Information overload


I think that at some stage, we’ve all experienced that feeling when there’s just too much stuff being hurtled at us from all angles. Our brains cannot process the information fast enough to make sense of it all. The result? Apart from stress, it makes decision making tough and can impact our productivity.

As it’s not usually possible to switch off or turn down the firehose, we deal with it by developing good management strategies. For example, we prioritize, multi-task, delegate and attempt to optimize our learning so that we can get on top of work with minimum delay. In business, we develop organizational structures and functional units to manage workloads in parallel. In this way, we divide and conquer, breaking down big projects into manageable chunks to ensure that we deliver on our goals.

Same story in enterprise infrastructure


Guess what? It’s the same story in the world of IT Enterprise Infrastructure. We design systems that constantly need to process more and more information but in less time. When these systems become exposed to huge sets of data that need to be processed or manipulated in some way, bottlenecks are also likely to form. In turn, this can hinder the performance and output of any applications that are hosted on the hardware infrastructure.

Parallelization rules


Likewise, we have management strategies in place to deal with this potential overload. In the same way that different teams do project work in tandem, parallelization is important in software development. For example, applications are designed to be multi-threaded so as to maximize the work a single system can do at any one given time.

In recent years, parallel processing hardware such as GPUs (Graphics Processing Units) and FPGAs (Field Programmable Gate Arrays) has gone even further in addressing the need for parallelization. The result? Many more small processing tasks are handled efficiently by many more cores or logic gates simultaneously, often with exponential speed increases versus standard CPU architecture.
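The same divide-and-conquer principle can be sketched in a few lines of ordinary Python: a batch of independent tasks is fanned out across worker processes, which is, on a much smaller scale, what GPU cores and FPGA logic gates do in hardware. The workload below is synthetic:

```python
# Illustrative only: the "divide and conquer" idea behind parallel processing. A batch
# of independent tasks is split across worker processes so many small computations run
# simultaneously, the same principle GPUs and FPGAs push much further in hardware.
from multiprocessing import Pool

def score(record):
    """Stand-in for a small, independent unit of work (e.g., scoring one record)."""
    return sum(i * i for i in range(record))

if __name__ == "__main__":
    batch = list(range(10_000, 10_100))          # synthetic workload of independent tasks
    with Pool(processes=4) as pool:              # fan the batch out across four workers
        results = pool.map(score, batch)
    print(f"processed {len(results)} records in parallel")
```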

Accelerator technology


Where can you go to help manage your enterprise infrastructure overload? At Dell Technologies, we offer a broad portfolio of platforms that can integrate accelerator technology and support heavy workloads, where there is a critical need to process data as quickly as possible in parallel.

Take the Dell PowerEdge C4140, a 1U server utilizing two second generation Intel® Xeon® Scalable processors, with up to 24 cores per processor. As part of our special patented design, the server is also designed to house up to four full-length double-width PCIe accelerators at the front of the chassis.

This location allows for superior thermal performance with maximized airflow, allowing these four cards to work as hard as possible and deliver return on your investment. As a result, this platform is ideally suited to machine learning training/inference, HPC applications and other large-scale, parallel processing workloads.


“Bump in the wire” traffic processing


Of course, there are also applications in this HPC/AI world that are heavily latency dependent, where data needs to be processed as close to wire speed as software architects can achieve. For example, picture “bump in the wire” type network traffic processing or systems that are processing financial transactions.

In these scenarios, latency matters big time. Ingesting data into these accelerators in the most efficient way possible is an important way to control the “firehose.” When considering the entire application structure, it’s important to remember that as latency is a cumulative issue, minimizing it at the lower layers of the stack makes sense.

Specially redesigned for OEM customers


With all of this in mind, Dell Technologies OEM | Embedded & Edge Solutions has now modified the existing, high-performing Dell PowerEdge C4140 platform to specifically meet the needs of OEM customers dealing with latency-sensitive workloads.

High-bandwidth IO ports on the FPGA accelerators, located at the front of the unit, operate as the main source of data ingest versus relying on the server’s own IO and CPUs to manage the transmission. The result? You can now design solutions with significantly reduced latency. Remember, this design is unique – there are no other Tier 1 providers in the industry with a similar product.

Reduce bottlenecks and accelerate processing


The good news is that this architecture significantly reduces bottlenecks and accelerates the processing of streaming and dynamic data. And of course, our OEM customers can customize and rebrand the platform to build dedicated appliance solutions across multiple verticals, including Finance, Energy, Healthcare/Life Sciences, Telecom and Defense.

Multi-disciplined engineering team


And don’t forget: if you’re designing applications and building infrastructure for latency-sensitive HPC, AI or network processing workloads, our multi-disciplined engineering group is ready to help with your design so that you can spend more time innovating and running your business.

The bottom line is that there’s no need to feel alone or drink from the firehose! We’re here to help you accelerate your processing power with the OEM-ready Dell PowerEdge C4140.

Source: dellemc.com

Thursday, 9 April 2020

Edge Is The New Core

Let’s face it, the debate is over as to whether telcos should create innovative, localized services in the core or at the edge. If past performance is an indicator of future success, telcos clearly need a radical transformation of their self-defined legacy network hierarchies, one that lets them move away from rigid network boundaries and focus on net-new opportunities to drive growth and revenue. To create these opportunities, they must move closer to the edge, where the users are, and adopt a flexible, nimble architectural approach. While this is a new model from a telco perspective, it is business as usual for cloud providers, their real competition.

Looking at the 5G vision from the perspective of International Mobile Telecommunications (IMT-2020) requirements, there are three key network focus areas:

◉ Throughput – to accommodate eMBB (enhanced Mobile Broadband) use cases

◉ Latency – to accommodate uRLLC (ultra-reliable low-latency communication) use cases

◉ Massive Connections – to accommodate mMTC (massive machine type communications) carrier IoT

These key areas mandate a different kind of network infrastructure: a shared, common infrastructure whose design follows the route to market and how the telco chooses to monetize the specific use cases tied to it. Monetization can target B2C use cases, or B2B use cases with a vertical focus. The key theme is that the network infrastructure is designed to drive business outcomes, and telcos must view it through the lens of an end-to-end transformation incorporating the following principles:

◉ Architectural adjustments – Essential network functions migrate from the core to the edge. The idea is to leave control plane functions in centralized locations and move the user plane and other throughput-intensive functions to the edge. These relatively simple adjustments shave off a good amount of latency and round-trip time. A common cloud fabric that allows workloads to be positioned on demand will only increase reliance on containers.

◉ Network Infrastructure Sharing – The focus is on common x86-based compute infrastructure that drives multiple workloads. By decoupling network functions from the underlying hardware, workload software from a variety of vendors can share the same compute infrastructure. Workloads may be real-time and bandwidth-intensive, such as vRAN or AR/VR, or connection-intensive, such as narrowband IoT (NB-IoT) and LTE-M scenarios for industrial IoT use cases. These workloads can coexist on the same shared compute infrastructure and are further segmented, or sliced, to guarantee SLAs and KPIs in a managed services scenario led by the telco.

◉ SLA-based services focus – The idea behind network slicing is to create SLA-based service opportunities as the network infrastructure becomes a shared resource. While B2B scenarios largely look like a telco managed service, in the end it is an ecosystem- or partnership-based service model. Telcos may choose to partner with MSPs or CSPs to create bundled service offerings, and the network itself must therefore be flexible enough to support this. After all, network slicing is a borrowed enterprise concept, widely known there as multi-tenancy, and the idea is much the same: provide guaranteed services without disruption (see the illustrative sketch after this list).
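
To make the multi-tenancy analogy concrete, here is a purely illustrative Python sketch (the slice names, SLA figures and capacity are invented) that models slices as tenants with guaranteed shares of one common infrastructure:

from dataclasses import dataclass

@dataclass
class NetworkSlice:
    name: str
    guaranteed_bandwidth_mbps: int   # SLA: minimum throughput
    max_latency_ms: float            # SLA: latency ceiling
    max_connections: int             # SLA: device scale

# Hypothetical slices sharing one physical network, like tenants sharing a platform
slices = [
    NetworkSlice("embb-video", 500, 50.0, 10_000),
    NetworkSlice("urllc-factory", 50, 1.0, 1_000),
    NetworkSlice("mmtc-sensors", 10, 200.0, 1_000_000),
]

TOTAL_BANDWIDTH_MBPS = 1_000  # capacity of the shared infrastructure (invented figure)

# A simple admission check: the guarantees handed out must fit within shared capacity
committed = sum(s.guaranteed_bandwidth_mbps for s in slices)
print("Committed:", committed, "of", TOTAL_BANDWIDTH_MBPS, "Mbps",
      "OK" if committed <= TOTAL_BANDWIDTH_MBPS else "over-committed")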

Dell Technologies’ vision is to transform the network through workload virtualization, delivering greater flexibility via software-defined networking and open compute infrastructure with telco-grade servers to suit a variety of deployment options. In addition to these highly reliable, telco-grade servers, platform partnerships with VMware and Red Hat facilitate workload orchestration and provisioning so new use cases can be onboarded quickly. Dell Technologies’ dual-pronged 5G strategy is essential to this telecom transformation and integral to telcos’ broader digital transformation.

As workloads demand localized content, and traffic flows allow offloading, aggregation and distribution of real-time and cached content, use cases such as vRAN, AR/VR and AI/ML require a common compute infrastructure with plenty of acceleration options. This is an area where Dell Technologies is actively collaborating with NVIDIA to bring GPU-based acceleration to the edge.


NVIDIA is building a programmable, cloud-native and scalable edge compute platform stack for telcos. The NVIDIA EGX platform is designed to deliver new AI services to consumers and businesses while supporting the deployment of 5G and virtual RAN infrastructure. This flexibility enables the convergence of B2C/B2B applications and network services on a single commercial off-the-shelf platform.

NVIDIA EGX enables a new GPU class for accelerated AI computing designed to aggregate and analyze continuous streams of data at the network edge. NVIDIA EGX includes an optimized software stack for Dell infrastructure that features NVIDIA drivers, a Kubernetes plug-in, a container runtime, and containerized AI frameworks and applications, including NVIDIA TensorRT, TensorRT Inference Server, DeepStream SDK and others.

Through this collaboration, Dell Technologies and NVIDIA are bringing the most advanced telecom transformation to market and helping our customers to be #5GReadyNow. In fact, Dell Technologies and NVIDIA have launched a telco-focused solution on SwiftStack.


Source: dellemc.com

Tuesday, 7 April 2020

NVIDIA is on a Roll, and at Dell Technologies, We’re In


Drawing on the power of their close relationship, Dell Technologies and NVIDIA are streamlining the path to GPU computing.


Thousands of people will be diving into some of the latest and greatest technologies for artificial intelligence and deep learning during NVIDIA’s GTC Digital conference. The online event provides developers, engineers, researchers and other participants with training, insights and direct access to experts in all things related to GPU computing.

The virtual crowd at GTC Digital will include many from Dell Technologies, including data scientists, software developers, solution architects and other experts in the application of the technologies for AI, data analytics and high performance computing.

At Dell Technologies, we’re investing heavily in servers and solutions that incorporate leading-edge GPUs and software from NVIDIA. In this post, I will offer glimpses of some of the exciting things that the Dell Technologies team is working on with our colleagues at NVIDIA.

NVIDIA EGX servers


Dell Technologies was among the first server companies to work with NVIDIA to certify systems for the NVIDIA EGX platform for edge computing. This cloud-native software allows organizations to harness streaming data from factory floors, manufacturing inspection lines, city streets and more to securely deliver next-generation AI, IoT and 5G-based services at scale and with low latency.

Early adopters of EGX edge — which combines NVIDIA CUDA-X software with NVIDIA-certified GPU servers and devices — include such global powerhouses as Walmart, BMW, Procter & Gamble, Samsung Electronics and NTT East, as well as the cities of San Francisco and Las Vegas.

NGC Container Registry


Providing fast access to performance-optimized software, NGC is NVIDIA’s hub for GPU-accelerated containers, software development kits and tools for AI, machine learning and HPC. NGC hosts containers for top AI and data science software, HPC applications and data analytics applications. These containers make it easier to take advantage of NVIDIA GPUs on premises and in the cloud, and each is fully optimized and works across a wide variety of Dell Technologies solutions.
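
As a hedged example of how one of these containers might be launched programmatically, the sketch below uses the Docker SDK for Python to pull an NGC image and run it with GPU access; it assumes Docker and the NVIDIA container runtime are installed, and the image tag shown is illustrative rather than a recommendation:

import docker

client = docker.from_env()

# Illustrative NGC image reference; actual tags are listed in the NGC catalog
image = "nvcr.io/nvidia/pytorch:20.03-py3"
client.images.pull(image)

# A DeviceRequest with the "gpu" capability exposes the host GPUs to the container
output = client.containers.run(
    image,
    command="nvidia-smi",
    device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
    remove=True,
)
print(output.decode())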

NGC also hosts pre-trained models to help data scientists build high-accuracy models faster, and offers industry-specific software development kits that simplify developing end-to-end AI solutions. By taking care of the plumbing, NGC enables people to focus on building lean models, producing optimal solutions and gathering faster insights.

NGC-Ready systems


Dell Technologies offers NGC-Ready systems for data centers and edge deployments. These systems have passed an extensive suite of tests that validate their ability to deliver high performance when running containers from NGC.

The bar here is pretty high. NGC-Ready system validation includes tests of:

◉ Single- and multi-GPU deep learning training using TensorFlow, PyTorch and NVIDIA DeepStream Transfer Learning Toolkit
◉ High-volume, low-latency inference using NVIDIA TensorRT, TensorRT Inference Server and DeepStream
◉ Data science using RAPIDS
◉ Application development using the NVIDIA CUDA Toolkit

Along with passing the NGC-Ready tests, these Dell EMC servers have also demonstrated the ability to support the NVIDIA EGX software platform that uses the industry standards of Trusted Platform Module (TPM) for hardware-based key management and Intelligent Platform Management Interface (IPMI) for remote systems management. NGC-Ready systems aim to create the best experience when it comes to developing and deploying AI software from NGC.
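
To ground this in something small and concrete, here is a minimal, hedged PyTorch sketch of the kind of GPU check and inference pass that these validation workloads build on (the model and input are invented for illustration):

import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Running on:", device, "| GPUs visible:", torch.cuda.device_count())

# A tiny, invented model standing in for a real training or inference workload
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)

# One batch of random input, pushed through the model on the selected device
batch = torch.randn(32, 128, device=device)
with torch.no_grad():
    logits = model(batch)

print("Output shape:", tuple(logits.shape))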

Support for vComputeServer


NVIDIA Virtual Compute Server (vComputeServer) with NGC containers brings GPU virtualization to AI, deep learning and data science for improved security, utilization and manageability. Even better, the software is supported on major hypervisor virtualization platforms, including VMware vSphere, which means your IT team can use the same management tools it already uses across the rest of your data center.

Today, vComputeServer is available on Dell EMC PowerEdge servers. And if you’re not sure which PowerEdge server best fits your accelerated workload needs, we can offer some help in the form of an eBook, “Turbocharge Your Applications.” It includes good/better/best options to help you find your ideal solution for various workloads.

GPU-accelerated servers


Dell Technologies offers a variety of NVIDIA GPUs in the Dell EMC PowerEdge server family. The accelerator-optimized Dell EMC PowerEdge C4140 server, for example, supports configurations with up to four V100 GPUs, or up to eight V100 GPUs using NVIDIA’s MaxQ setting (150W each). And this is just one of many PowerEdge servers available with NVIDIA GPU accelerators. For a broader and more detailed overview, check out the “Server accelerators” brochure.

Dell EMC Isilon with NVIDIA DGX reference architecture


In another collaboration, NVIDIA and Dell Technologies have partnered to deliver Dell EMC Isilon with NVIDIA DGX reference architectures. These powerful turnkey solutions make it easy to design, deploy and support AI at scale.

Together, Dell EMC Isilon F800 All-Flash Scale-Out Network-Attached Storage (NAS) and NVIDIA DGX systems address deep learning workloads, reducing the time needed to train on multi-petabyte datasets and to test analytical models on AI platforms. This integrated approach to infrastructure simplifies and accelerates the deployment of enterprise-grade AI initiatives within the data center.

Supercomputing collaboration


Dell Technologies is collaborating with NVIDIA, along with Mellanox and Bright Computing, on the Rattler supercomputer in our HPC & AI Innovation Lab in Austin, Texas. The Rattler cluster is designed to showcase extreme scalability by leveraging GPUs with NVIDIA NVLink high-speed interconnect technology. Rattler not only accelerates traffic between GPUs inside servers, but also between servers with Mellanox interconnects. Teams use this system for application‑specific benchmarking and workload characterizations.

The bottom line


NVIDIA and Dell Technologies are delivering some great hardware and software to enable and accelerate AI and deep learning workloads. We’re working closely to help our customers capitalize on all the good things that we are doing together.

Source: dellemc.com

Saturday, 4 April 2020

Kinetic Infrastructure is the Path to Full Composability


Well, the long gap since my last blog is because we’ve been flat out BUSY in our Dell EMC Server & Infrastructure Systems organization bringing this vision closer to reality. And, BTW, along the way we’ve become the No. 1 server vendor worldwide. For that, I want to thank our customers and partners: you are the ones who inspire us to create products and solutions that solve your problems.

You see, true composability requires the underlying data center infrastructure to be disaggregated into its constituent parts, so that resources like storage class memory, accelerators (GPUs, FPGAs, ASICs…) and the network can be put together in an ad hoc manner, ‘composed’ on the fly based on what the software applications require at the time. This is simply not possible today. The biggest challenge is that memory-centric devices need to be treated as first-class citizens behind the MMU, not the IOMMU, in modern processors.

Today, there is some level of ‘composability.’  For example:

◉ We can group servers logically and manage them as a single resource. This is what Converged Infrastructure delivers, and Dell EMC is a leader in this space today with almost 43 percent of the market. The Dell EMC DSS 9000 also supports this kind of capability at Rack Scale, utilizing a singular platform framework for compute, storage and network resources.

◉ Similarly, we can group storage into pools either via software-defined storage solutions or traditionally in a SAN.

◉ Finally, we can manage and compose the network logically in a solution like VMware NSX or through other software-defined networking solutions.


What the industry cannot do today to achieve composability, however, is free up system memory or any of the other memory-centric resources (GPUs, FPGAs and storage class memory) attached to the CPU inside the server. This means composability has a long way to go: we are still bound by the restrictions of sheet metal, and almost everything inside a server (with the exception of traditional disk or flash storage) is a trapped resource.

None of the above is new. So what has changed? Well, while we are still waiting for true composability, there have been some interesting advances in the industry that make me believe this goal is not far off. It helps that we are not just in “hope and pray” mode but are actually working across the industry to make it happen.

Enter the Gen-Z Core Specification 1.0


In February of this year, the Gen-Z Consortium announced the 1.0 core specification for the Gen-Z interconnect. This is a big deal because we now have an industry standard that can lead the way toward freeing up those trapped resources. Gen-Z delivers an interconnect that provides high-speed, low-latency access to data and devices outside the bounds of the CPU. (Full disclosure: Dell EMC is not just an active member of the Gen-Z Consortium but also president of the group.)

This is an important milestone because silicon providers and other developers can now begin to build the specification into their products, which in turn enables OEMs like Dell EMC and others to build solutions that take advantage of the interconnect. Once that happens, hardware becomes free of the restrictions of sheet metal, and we can unlock those trapped resources. This is especially important because I believe the real need for such a solution is rapidly overtaking the ‘hype’ around the composability discussion.

The problem we are trying to solve here is driven by Digital Transformation. The reality is that Digital Transformation is disruptive. As a matter of fact, CEOs this year are actively pursuing disruption in their industry sectors rather than waiting for their competitors to do it; we are definitely in the disrupt-or-be-disrupted era. As a result, IT leaders face an accelerating pace of change and the need to find new ways to accommodate and take advantage of new technology. This Digital Transformation is driving rapid adoption of many new technologies to meet business needs in new ways; hence technologies like storage class memory, FPGAs and GPUs are finding their way into server infrastructure today. That is why a modern, modular infrastructure is critical as we continue our journey to truly composable infrastructure.

Introducing Kinetic Infrastructure


Many vendors have advocated a “composable solution,” but given that key memory-centric resources are still trapped inside the server, they can only deliver a partial solution. This is why we believe true composability is kinetic.

Kinetic infrastructure is a term we are introducing to define true composability. It brings the benefits of a modular design but extends configuration flexibility down to the individual storage device and, in the future, all the way to memory-centric devices (DRAM, storage class memory, GPUs, FPGAs…). A kinetic infrastructure makes it possible to assign the right resources to the right workload and to change them dynamically as business needs change.
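
As a purely conceptual illustration of that idea (all resource names and quantities below are invented), composing and releasing resources from a shared pool might be modeled like this:

# Hypothetical free pool of disaggregated resources (units are invented)
pool = {"cpu_cores": 256, "dram_gb": 4096, "gpu": 16, "scm_gb": 8192}

def compose(requirements):
    """Reserve resources from the shared pool for one workload, if available."""
    if all(pool[k] >= v for k, v in requirements.items()):
        for k, v in requirements.items():
            pool[k] -= v
        return dict(requirements)   # the 'composed' node handed to the workload
    return None

def release(allocation):
    """Return a workload's resources to the pool when business needs change."""
    for k, v in allocation.items():
        pool[k] += v

# Compose a GPU-heavy node on the fly, then release it when the job is done
node = compose({"cpu_cores": 32, "dram_gb": 512, "gpu": 4})
print("Composed:", node, "| remaining pool:", pool)
if node:
    release(node)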

A kinetic infrastructure releases the potential of your organization. Potential energy does nothing for the business. A kinetic infrastructure puts capital investment back in motion to deliver improved productivity and better ROI.


Source: dellemc.com