Artificial Intelligence (AI) in the business world is closely tied to AI-enabled operations and applications that take work off employees' plates so they can refocus on higher-value projects and deliverables. In essence, AI enables businesses to accelerate problem solving, decision making, perception and communications within the business, as well as with its workforce and growing customer base.
In the enterprise, AI is rapidly gaining traction, delivering results for many companies and supporting business transformation goals. Deploying the right foundation once required specialized hardware and compute, but recent years have brought more cost-effective ways to consume and install solutions that tackle AI, machine learning and deep learning on two key fronts: training and inferencing.
And for many companies, an AI-ready infrastructure is no longer confined to the data center or IT closet. You can now leverage AI, and particularly AI inferencing, at the edge.
Why invest in inferencing
Already, businesses around the globe have jumped on the AI bandwagon, while many others are still working out what to do with AI within their organization and how to productize AI as solutions for their customers. With more AI-enabled applications in use, and trained AI models already deployed, the next focus is testing those models with real-world customer input. Contact center automation, chatbots and customer-response systems must now deliver a smooth, scalable customer experience based on the questions customers ask. Inferencing plays a large role here: it “infers” results from the data fed into the trained model and quickly returns feedback to a researcher, process stakeholder or customer.
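To make the term concrete, here is a minimal sketch of what one inferencing step looks like in practice, assuming PyTorch. The model file, label list and feature tensor are hypothetical stand-ins for whatever trained customer-intent model a contact center has deployed; the point is that inference is only a forward pass through an already-trained model.

```python
import torch

# Hypothetical artifacts: "intent_model.pt" and INTENT_LABELS stand in for
# whatever trained customer-intent model has already been deployed.
INTENT_LABELS = ["billing", "returns", "tech_support", "other"]
model = torch.jit.load("intent_model.pt")
model.eval()

def infer_intent(features: torch.Tensor) -> str:
    # Inference is just a forward pass: no gradients, no weight updates,
    # only "inferring" from what the model has already learned.
    with torch.inference_mode():
        logits = model(features.unsqueeze(0))
    return INTENT_LABELS[int(logits.argmax(dim=1))]
```

The same forward-pass pattern applies whether the request comes from a chatbot, a researcher's notebook or a production queue.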
With more technology choices available, and businesses pushing to apply AI to new areas of their vision, AI is an evolving landscape that can be tailored to different needs and many environments.
At the edge, machine-driven data collection, real-time manufacturing and IoT inputs require active decision-making to make sense of data generated at megabytes to gigabytes per second, or faster. Again, the operator needs to infer enough from the data to take the next step.
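As a rough illustration, an edge inferencing loop often reduces to scoring each incoming batch and acting only when the model is confident enough. This is a minimal sketch, again assuming PyTorch; the data feed, model file, threshold and alert action are all hypothetical placeholders for site-specific pieces.

```python
import torch

THRESHOLD = 0.9  # assumed confidence level required before acting

def read_sensor_batch() -> torch.Tensor:
    # Placeholder for the real IoT/manufacturing data feed on site.
    return torch.randn(64, 16)

def trigger_alert(index: int) -> None:
    # Placeholder for the real downstream action (stop a line, flag a part).
    print(f"sample {index}: anomaly inferred, taking next step")

# "anomaly_model.pt" stands in for an already-trained, deployed model.
model = torch.jit.load("anomaly_model.pt").eval()

while True:  # runs continuously as data streams in
    batch = read_sensor_batch()
    with torch.inference_mode():
        scores = torch.sigmoid(model(batch)).flatten()
    for i, score in enumerate(scores):
        if score.item() > THRESHOLD:
            trigger_alert(i)
```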
Finding a home for AI
As AI continues to deliver real results for a business, you now have to consider where and how AI can be deployed in your infrastructure. What AI projects are being kicked off in the organization? What AI solution is right for you? How can AI inferencing at the edge bring increased capabilities for the organization? Does your infrastructure allow you to democratize AI inferencing across all users and environments? What are the best compute and graphics solutions for AI?
Delivering a compute platform that supports AI inferencing starts with a performance-optimized infrastructure flexible enough to support a range of needs across an organization and its users. Traditionally, the server CPU has handled inferencing for these solutions. However, with the CPU both driving applications and performing inferencing, there is a growing need to offload AI and inferencing tasks onto accelerators such as those built on the NVIDIA Ampere architecture.
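A minimal sketch of that offload, assuming PyTorch on a server with a CUDA-capable accelerator; the model here is a trivial stand-in, but the device-placement pattern is the same for real workloads.

```python
import torch

# Run inference on the GPU when one is present, falling back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in model: any trained nn.Module is moved to the device the same way.
model = torch.nn.Linear(128, 10).to(device).eval()

batch = torch.randn(32, 128, device=device)  # requests land on the accelerator
with torch.inference_mode():
    predictions = model(batch).argmax(dim=1)
# The CPU stays free to drive the application while the GPU handles inference.
```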
The Dell EMC PowerEdge server portfolio enables enterprises and businesses to harness the power of AI and significantly improve inferencing with NVIDIA AI Solutions. The flagship NVIDIA-Certified PowerEdge R750xa, for example, is a superb, purpose-built platform for AI, supporting NVIDIA training accelerators such as the A100 and inferencing accelerators such as the A30, with planned support for the newest member of the inferencing family, the NVIDIA A2 GPU.
With this portfolio and NVIDIA AI Enterprise, an end-to-end software suite for developing and deploying AI, customers can readily support their AI inferencing projects on an extensive range of NVIDIA-Certified PowerEdge servers available now, along with VMware vSphere integration to deliver and democratize AI foundations across all their users, global workers, data centers and edge deployments.
AI now helps you get back to work
Furthermore, with planned support for the NVIDIA A2 in the PowerEdge portfolio, customers will be able to meet inferencing needs in the data center and across edge deployments, improving outcomes at those sites and processing incoming data faster. While the A30 serves as the performance inference choice for businesses, the A2 lets customers deploy inferencing at the edge with lower power requirements and a smaller form factor, both ideal for servers in edge environments. Dell already offers several solutions that help customers simplify the edge across a broad set of industries and use cases that require AI. PowerEdge servers, including the XE2420 (which already supports the A30) and the new XR11/XR12, help customers at the edge generate insights on a resilient, cyber-secure infrastructure.
With the tremendous acceleration horsepower offered by today's GPUs and CPUs, IT has the resources to intelligently power multiple workloads and users (via VDI, for example) from graphics-accelerated servers, while giving workers access to graphics applications previously unavailable to them. Dell PowerEdge servers now support the NVIDIA A16 GPU, enabling a richer, more collaborative experience for 2x more users and boosting streaming-media performance. The A16 was designed for accelerated VDI and is supported on Dell PowerEdge servers, starting with the PowerEdge R750xa, R750, R7525 and R7515.
AI-powered infrastructure can help businesses of all sizes develop and drive more value from AI projects while offloading work from the server CPU to purpose-built GPUs like the NVIDIA A2. This technology enables your teams to get back to work on higher-value initiatives.
Source: delltechnologies.com