Saturday, 30 December 2023

Artificial Intelligence is Accelerating the Need for Liquid Cooling

Artificial intelligence (AI) is revolutionizing the workflow of many companies. It enables innovation in almost every field by processing and interpreting huge amounts of data in real time, improving decision-making and problem-solving and producing more accurate predictive analytics to forecast trends and outcomes. All of this compute- and accelerator-driven innovation demands greater power consumption and presents challenges for data center cooling.

Over the past ten years, significant innovations in CPU design have enlarged core counts and increased frequency. As a result, CPU Thermal Design Power (TDP) has nearly doubled in just a few processor generations and is expected to continue to increase over time. Power-hungry, high-performance general-purpose GPUs have emerged to capitalize on workloads such as AI and machine learning (ML), but their heat byproduct is becoming a challenge for rack and data center deployments. As with CPUs, GPU power consumption has grown rapidly: the NVIDIA A100 GPU drew 300W in 2021, while the latest NVIDIA H100 draws up to 700W, and further enhancements could push GPU power consumption past 1,000W within the next three years.

Figure 1: CPU power consumption history.

The cooling challenges presented by these powerful processors are being met by innovation beyond the silicon. Cooling components, such as fans and heat sinks, are getting more efficient with each generation. Dell Technologies' intelligent system management, iDRAC, ensures adequate cooling with minimal fan usage by constantly monitoring sensors throughout the server and learning from its environment. These and other features are part of Dell's Smart Cooling technology, ensuring the fraction of server power spent on cooling can decrease even as total power demands increase.

A key aspect of Dell’s Smart Cooling technology is Direct Liquid Cooling (DLC), where a liquid coolant is pumped to hot components within each server. Dell is on its third generation of DLC server platforms. This journey started in the HPC space in 2018, and we now offer 12 DLC-enabled platforms with our 16th generation servers because DLC is not just for HPC anymore. Customers choose DLC-enabled servers to lower their cooling costs, save space and use more of their limited data center power for compute rather than cooling.

Liquid Cooling Basics Explained


Liquid cooling is a thermal extraction method that uses liquid coolant to remove heat from some or all of the components inside a server. Dell's solution uses Direct Liquid Cooling, often abbreviated to DLC. In Dell's DLC3000 and DLC7000 solutions, a coolant distribution unit (CDU) circulates liquid around a coolant loop to collect and carry heat away from the server. A heat exchanger then transfers that heat to facility-chilled water, which transports it out of the data center. PowerEdge servers use specially designed liquid-cooled cold plates that sit in direct contact with the servers' CPUs and GPUs.

Figure 2: The components of a typical DLC solution.

Six Key Benefits of Direct Liquid Cooling


Liquid cooling is much more efficient at collecting and moving heat than air cooling; by mass, liquid coolant can hold roughly four times more heat than air. As a result, DLC offers numerous advantages over traditional air-cooling methods, making it an attractive option for modern data centers.

1. Greater computational density. DLC allows for higher server density in data centers because there is no longer a need to design space for the required airflow. For example, Dell DLC allows customers to deploy 58% more CPU cores per rack with the PowerEdge C6620 than with the air-cooled C6620.
2. Uniform cooling. Liquid cooling eliminates hot spots and ensures even distribution of cooling across servers.
3. Improved server performance. Maintaining servers at supported temperatures through liquid cooling can improve performance and even lower failure rates. Overheating can force the CPU to temporarily apply thermal throttling, which reduces server performance.
4. Energy savings. By reducing the need for energy-intensive air conditioning systems and high-speed fans, direct liquid cooling can deliver energy savings and reduced operational costs, reflected in a lower power usage effectiveness (PUE) ratio.
5. Increased sustainability. Lower power can mean a reduced carbon footprint.
6. Noise reduction. As a by-product, Direct Liquid Cooling systems are generally quieter than air-cooling systems because server fans can run at much lower speeds and the data center's air-moving infrastructure has far less work to do.

Figure 3: Hardware components of the Dell DLC solution.

Dell customers can now benefit from new pre-integrated DLC3000 and DLC7000 rack solutions for PowerEdge servers that eliminate the complexity and risk associated with correctly selecting and installing liquid cooling. The DLC3000 rack solution is ideal for customers looking to deploy up to five racks or to pilot their first DLC solution. It includes a rack, a rack manifold to distribute coolant to the servers and an in-rack CDU, ready to accept factory-built Dell DLC-enabled rack or modular servers. The rack with the integrated DLC3000 cooling solution is built, tested and then delivered to the customer's data center floor, where the Dell professional services team connects it to the facility-chilled water supply and ensures full operation. Finally, Dell ProSupport maintenance and warranty coverage backs everything in the rack to make the whole experience as simple as possible.

Customers can monitor and manage server power and thermal data with Dell OpenManage Enterprise Power Manager. Power Manager collects information supplied by each server's iDRAC and can report it for an individual server, a rack, a row or the entire data center. Organizations can use this data to review server power efficiency and locate thermal anomalies such as hotspots. Power Manager also offers additional features, including power capping and carbon emission calculation, and has built-in automation to respond to DLC leaks and thermal events.
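The same kind of power and thermal telemetry that Power Manager aggregates can also be read directly from each server's iDRAC over the standard DMTF Redfish API. The sketch below is a minimal illustration only: the chassis path, sensor names and credentials are assumptions that can vary by iDRAC model and firmware version.

```python
# Illustrative only: poll power draw and inlet temperature from an iDRAC via
# the standard DMTF Redfish Power and Thermal resources. Paths, sensor names
# and credentials are assumptions and may differ by firmware version.
import requests

IDRAC_URL = "https://192.0.2.10"              # hypothetical iDRAC address
AUTH = ("monitor_user", "password")           # hypothetical read-only account

def get_json(path):
    """GET a Redfish resource and return the parsed JSON body."""
    resp = requests.get(f"{IDRAC_URL}{path}", auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    return resp.json()

# Power resource: reports watts consumed at the chassis level.
power = get_json("/redfish/v1/Chassis/System.Embedded.1/Power")
watts = power["PowerControl"][0]["PowerConsumedWatts"]

# Thermal resource: reports temperature sensors and fan readings.
thermal = get_json("/redfish/v1/Chassis/System.Embedded.1/Thermal")
inlet = next(t for t in thermal["Temperatures"] if "Inlet" in t["Name"])

print(f"Power draw: {watts} W, inlet temperature: {inlet['ReadingCelsius']} C")
```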

As CPU and GPU power continues to grow to support the most demanding workloads, the use of liquid cooling will expand to play an increasingly important role in data centers. While Direct Liquid Cooling offers many benefits, it is not without its challenges: implementing liquid cooling requires planning and additional installation work. We have helped many customers along this journey to reduce their data center PUE. PhonePe, for example, saw its PUE ratio drop from 1.6 to 1.3. Dell Technologies can support your DLC strategy, wherever you are in your journey.
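To put a PUE improvement of that size in perspective, here is a rough sketch using purely illustrative numbers (not PhonePe's actual figures). PUE is total facility energy divided by IT equipment energy, so for a fixed IT load the facility draw scales linearly with PUE.

```python
# Illustrative sketch: facility energy implied by a PUE improvement.
# The IT load below is hypothetical; only the ratio matters.

IT_LOAD_KW = 500                      # assumed IT equipment load
HOURS_PER_YEAR = 24 * 365

def annual_facility_mwh(pue, it_kw=IT_LOAD_KW):
    """Total annual facility energy (MWh) for a given PUE and IT load."""
    return pue * it_kw * HOURS_PER_YEAR / 1000

before, after = annual_facility_mwh(1.6), annual_facility_mwh(1.3)
print(f"Before: {before:,.0f} MWh/yr, after: {after:,.0f} MWh/yr")
print(f"Saved:  {before - after:,.0f} MWh/yr ({(before - after) / before:.1%})")
# Dropping PUE from 1.6 to 1.3 cuts total facility energy by roughly 19%
# for the same IT workload.
```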

Source: dell.com

Thursday, 28 December 2023

This Ax Stinks – Accelerating AI Adoption in Life Sciences R&D


In the world of life sciences research and development, the potential benefits of artificial intelligence (AI) are tantalizingly close. Yet, many research organizations have been slow to embrace the technological revolution in its entirety. Specifically, while they might leverage AI for part of their process (typically “in model” or predictive AI), they hesitate to think about leveraging improved techniques to review existing data or generate potential new avenues of research. The best way to understand the problem is by using an analogy of an ax versus a chainsaw.

The “Ax vs. Chainsaw” Analogy

Imagine a team of researchers tasked with chopping down trees, each equipped with an ax. Over time, they develop their own skills and processes to efficiently tackle the job. Now, envision someone introducing a chainsaw to the team. To effectively use the chainsaw, the researchers must pause their tree-chopping efforts and invest time in learning this new tool.

What typically happens is that one group of researchers, especially those who have found success with the old methods, will resist, stating they are too busy chopping down trees to learn how to use a chainsaw. A smaller subset will try to use the new tools incorrectly and therefore find little success. They will state, "This ax stinks." Meanwhile, the early adopters and those most interested in finding new ways of doing things will become the most productive on the team. They will stop, learn how to use the new tool and rapidly surpass those who did not take the time to do so.

In our analogy, it is important to acknowledge the chainsaw, representing AI tools, is not yet perfectly designed for the research community’s needs. Just like any emerging technology, AI has room for improvement, particularly in terms of user-friendliness and accessibility—and especially so for researchers without a background in IT. Dell Technologies is working with multiple organizations to improve tools and increase their usability. However, the current state of affairs should not deter us from embracing AI’s immense potential. Instead, it underscores the need for ongoing refinement and development, ensuring AI becomes an even more powerful and accessible tool in the arsenal of life sciences research and development.

The Potential of AI in Life Sciences R&D as a Catalyst, Not a Replacement

By moving beyond conventional predictive ("in model") AI methods and harnessing extractive and generative AI (GenAI), researchers can uncover remarkable insights concealed within overlooked and underutilized existing data, going beyond the ordinary review process and potentially opening new avenues of exploration.

It is crucial to understand AI is not here to replace researchers but to empower them. The technology adoption curve still applies, and we are in the early adopter phase. However, early adopters in the pharmaceutical space can gain a significant advantage in terms of accelerated time to market. Keep in mind that AI is wonderful at finding correlation and wholly inadequate at establishing causation.

Create an AI Adoption Plan, Select the Right Partners and Measure Your Success

To kickstart AI adoption, research organizations across all of life sciences need a structured plan. My recommendation is to start by prioritizing AI adoption around specific targets with clear, well-defined and addressable pain points. In particular, focus on areas where you can measure success. In addition, be constantly vigilant about data security. Organizationally, this goes beyond compliance: consider protecting your data (and the data about your data) at all times.

In addition, select the right partners to help you accelerate adoption. At Dell, we have been working overtime to make sure our AI and GenAI solutions and services align with your needs, focusing on a customizable, multicloud infrastructure that keeps your organization in control of its data sovereignty.

Lastly, measure your success and expand. It is important that leadership is involved, that your KPIs meet requirements and that everyone in the organization is comfortable with embracing failure and adapting to find success.

For life sciences research organizations, the key decision is to accelerate adoption of AI across the organization. Embracing AI tools and methodologies is not just an option; it is a necessity to stay competitive in the ever-evolving landscape of scientific discovery and therapeutic development. Dell Technologies is here to help with that process and stands ready to work with your organization wherever you are on your journey. Remember, those who master the chainsaw of AI will undoubtedly become the trailblazers in the field, ushering in a new era of accelerated progress.

Source: dell.com

Tuesday, 26 December 2023

Unleash Multicloud Innovations with Dell APEX Platforms and PowerSwitch


Organizations are leveraging multicloud deployments for modern containerized apps to boost revenue, enhance efficiency and elevate user experiences. Multicloud has become a go-to choice for organizations, with Kubernetes at the forefront. Among organizations, 42% opt for Red Hat OpenShift to manage their containers. Yet, multiple clouds can introduce complexity. Modern multicloud container setups demand consistent operations and robust automation for IT peace of mind, enabling a focus on delivering application value over infrastructure management.

The Dell APEX Cloud Platform for Red Hat OpenShift is a turnkey infrastructure jointly engineered with Red Hat to transform OpenShift deployments on-premises. With a bare metal implementation, the platform is designed to reduce cost and complexity of OpenShift deployments, while optimizing workload outcomes and enhancing security and governance.


Reduce cost and complexity. Dell APEX Cloud Platform for Red Hat OpenShift delivers everything you need to rapidly deploy and run Red Hat OpenShift on a turnkey, integrated bare metal infrastructure. The extensive automation enabled by the Dell APEX Cloud Platform Foundation Software slashes deployment time by over 90% while reducing time for complex lifecycle management tasks by up to 90%. Further, we performed over 21,000 hours of interoperability testing for each major release, ensuring predictability and reliability.

Optimize workload outcomes. By optimizing delivery of OpenShift on-premises, the platform helps accelerate application modernization initiatives. Built on next-generation Dell PowerEdge servers and Dell's scalable, high-performance software-defined storage (SDS), the platform delivers against stringent SLAs for a broad range of modern mission-critical workloads. Further, with a universal storage layer between the on-premises Dell APEX Cloud Platform and Dell APEX Storage for Public Cloud, the platform facilitates simpler movement of workloads across your IT estate.

Enhance security and governance. Based on the cyber-resilient foundation of next-generation PowerEdge nodes, the Dell APEX Cloud Platform accelerates Zero Trust adoption, while providing multi-layer security and governance capabilities built throughout the technology stack. Further, with a bare metal implementation, the platform enhances security by reducing potential attack surface.

Dell Networking for Dell APEX Cloud Platform for Red Hat OpenShift


The network considerations for Dell APEX Cloud Platform for OpenShift are no different from those of any enterprise IT infrastructure: availability, performance and scalability. Dell APEX Cloud Platform for OpenShift is manufactured in the factory per your purchase order and delivered to your data center ready for deployment. The overall solution has been tested with Dell PowerSwitch platforms, and the nodes in the Dell APEX Cloud Platform for OpenShift can attach to Dell networking Top of Rack (ToR) switches. These switches meet the ACP for Red Hat OpenShift network functional requirements, which at a high level include:

  • 25G / 100G NICs.
  • LACP (802.3ad) support.
  • MTU sizes of 1500 for management and 9000 for data.
  • The ability to disable IPv6 multicast snooping to ensure proper discovery of nodes.
  • VLAN support (tagged VLAN 3939 or native VLAN 0) on management ports.

APEX Cloud Platform for Red Hat OpenShift – Fully Integrated Stack.

Put Dell APEX Cloud Platform for Red Hat OpenShift to Work for You


Having an end-to-end stack from Dell Technologies enables customers to build a cohesive and efficient IT infrastructure, allowing a greater focus on core business objectives rather than on managing complex and disparate infrastructure components. Using Dell for integrated networking, storage and compute solutions offers several key benefits, including:

  • Seamless integration of Dell networking with Dell APEX Cloud Platform for Red Hat OpenShift, which simplifies deployment, management and maintenance and reduces the risk of interoperability issues.
  • Optimized, overall better system performance when Dell APEX Cloud Platform for Red Hat OpenShift is deployed with Dell networking.
  • A single point of support across the overall deployment, providing a consistent service experience.
  • Competitive pricing for the Dell APEX Cloud Platform for Red Hat OpenShift solution with Dell networking, compared to standalone components from various vendors.
  • Reduced complexity and efficient management translate into lower operational expenses (OpEx).
  • Regular and seamless system updates across the ACP for OpenShift ecosystem.

Source: dell.com

Saturday, 23 December 2023

Dell and Druva Power Innovation Together


In a significant stride toward more secure, efficient cloud solutions, Dell Technologies and Druva are expanding their work together. The partnership continues to deliver an innovative data protection solution, streamlining operations and optimizing costs with Dell APEX Backup Services, powered by Druva. This cutting-edge, 100% SaaS backup and recovery service empowers customers to automate processes, save valuable time and resources and bolster resilience against cyber threats such as ransomware.

Dell Technologies has been a trailblazer in innovation, consistently seeking partnerships that bring tangible value to its customers. Through our continued work with Druva, we bring a powerful data protection as-a-Service offering to customers that simplifies day-to-day operations and eliminates infrastructure management.  With an all-in-one solution for backup, disaster recovery and long-term retention, Dell is helping customers meet their evolving data protection requirements.

Dell is focused on elevating its data protection portfolio to offer modern enterprises the flexibility and scalability they need—and at the heart of this endeavor lies the cloud. Dell APEX Backup Services, built on Druva, employs a 100% SaaS approach that revolutionizes how organizations approach resilience and security. Unlocking unparalleled flexibility, scalability and cost-effectiveness, Dell APEX Backup Services empowers organizations to harness the potential of the cloud, ensuring their data is not only secure but also readily accessible whenever and wherever it’s required.

Our customers are always looking to us for a more secure, efficient and simple cloud-driven future to protect traditional and modern workloads from potential risk. Our continued collaboration with Druva reaffirms Dell’s commitment to providing industry-leading data protection solutions that not only simplify data protection but also enhance efficiency and agility for our customers.

Momentum, Growth and Market Trust


Dell’s APEX Backup Services has reduced data protection costs and complexity for more than 1,000 customers since its introduction in May 2021. In just two short years, we’ve secured data for more than 900,000 end-users and increased total data protected by more than 12X. Expanding our relationship with Druva demonstrates our commitment to seizing opportunities when the time is right.

Strengthening Security and Ransomware Resilience


Security threats, particularly ransomware attacks, pose a constant and ever-growing risk to businesses. Recognizing this, Dell has prioritized cybersecurity as a critical aspect of our partnership with Druva. Proven leaders in cyber resilience, Dell and Druva deliver a powerful combination of autonomous protection, rapid response and guaranteed recovery to better protect data and enable ransomware recovery in just hours rather than days or weeks.

“At Druva, we are thrilled to extend our partnership with Dell Technologies, which we believe is a testament to customer trust and market excitement surrounding APEX Backup Services,” said Jaspreet Singh, founder and CEO of Druva. “As ransomware continues to plague businesses of all sizes, Druva and Dell represent an ideal choice for modern IT, delivering accelerated ransomware recovery along with enhanced data security posture monitoring and observability—reducing incident response times and ensuring recovery readiness. With APEX Backup Services, customers embark on their cloud journey with confidence, knowing they have the best-in-class cyber resilience at their side.”

Dell Technologies’ expanded partnership with Druva is a game-changer for businesses seeking top-tier data resiliency. The success stories of our valued customers highlight the tremendous impact that APEX Backup Services can have on organizations.

  • Nuvance Health achieved 70% faster backup times while scaling protection effortlessly and cutting costs for safeguarding Microsoft 365 data, SQL databases and 1,600 VMware virtual machines.
  • The Illinois State Treasury Department reduced platform management time by 80%, securing over $50 billion in assets and sensitive data and bolstering its security posture.
  • TMS Entertainment, Ltd. in Japan eliminated its data center footprint and resolved backup errors instantly, enhancing operational efficiency.

Stay tuned for more exciting updates from Dell Technologies and Druva as we continue to lead the way in data protection and cyber resilience.

Source: dell.com

Friday, 22 December 2023

Boost Efficiency and Update Your VxRail for Modern Demands


In the fast-paced realm of technology, where progress is measured in leaps rather than steps, the adage “out with the old, in with the new” echoes louder than ever. It’s not just a catchphrase; it’s a battle cry for businesses navigating IT transformation. And the need to refresh outdated technology infrastructure is not merely a choice—it’s a strategic imperative.

To support these imperatives, Dell Technologies is continuously upgrading our flagship HCI offering, VxRail. We recently introduced two new VxRail platforms built on the latest PowerEdge servers and 4th Generation Intel Xeon Scalable processors. Today, we're further expanding our portfolio with all-NVMe nodes supporting VMware vSAN Express Storage Architecture (ESA), making VxRail a game changer in terms of density, performance and cost efficiency.

Unmatched Performance for Complex Workload Management


Why does this matter? It's not just about refreshing the old; it's about unlocking new possibilities in IT modernization. The new VxRail platform offers a 40% increase in cores, 50% faster memory using DDR5 and double the throughput via PCIe Gen5. These enhancements empower businesses to efficiently manage and process complex workloads, including big data analytics and high-end computational tasks.

Easily Embrace AI with VxRail


AI remains the talk of the town. With Intel’s 4th Generation Intel Xeon Scalable Processors and built-in Intel AMX accelerator, VxRail takes AI to a whole new level. While many AI applications require GPUs, the Intel AMX accelerator allows some AI workloads to run on the CPU instead of offloading them to dedicated GPUs, delivering a 3.1x boost in image classification inferencing and a 3.7x improvement for natural language processing (NLP) inferencing. Eliminating the need for GPUs lowers overall infrastructure costs, and better yet, the functionality is supported out of the box with automated lifecycle management as part of the normal VxRail HCI System Software LCM process.

Resource Optimization and Cost-Effective Operations


VxRail with high-density NVMe storage and VMware vSAN ESA also optimizes power and spatial efficiency, yielding more cost-effective operations. There's a perception that all-NVMe systems are costly. The reality is that the optimizations result in capacity and resilience improvements that more than compensate for the initial cost: up to 14% lower cost per TB (raw) and up to 34% lower cost per TBu (usable). Now picture a 3.6x increase in usable storage and a 2.5x increase in VMs per host. That's not only an improvement; it's an investment in your business's future success.

More Operational Benefits in CloudIQ for VxRail


With intelligent health and sustainability observability, analytics, forecasting and intelligent multisystem/multisite LCM, CloudIQ is your VxRail tech-savvy sidekick. And now you can do even more with CloudIQ, thanks to new features like performance anomaly detection, inventory metadata for data processing units and multisite stretched clusters, service request tracking and observability in the CloudIQ mobile app for Android users (iOS to follow).

No Better Time to Refresh


Technology is evolving across all industries, and for early adopters of VxRail, the latest generation opens up a world of possibilities. Run more workloads with fewer nodes, reduce your infrastructure and carbon footprint and adapt to market demands seamlessly.

Source: dell.com

Thursday, 21 December 2023

How to Run Quantized AI Models on Precision Workstations


Generative AI (GenAI) has crashed into the world of computing, and our customers want to start working with large language models (LLMs) to develop innovative new capabilities that drive productivity, efficiency and innovation in their companies. Dell Technologies has the world's broadest AI infrastructure portfolio, spanning from cloud to client devices, all in one place, providing end-to-end AI solutions and services designed to meet customers wherever they are in their AI journey. Dell also offers hardware solutions engineered to support AI workloads, from workstation PCs (mobile and fixed) to servers for high-performance computing, data storage, cloud-native software-defined infrastructure, networking switches, data protection, HCI and services. But one of the biggest questions from our customers is how to determine whether a PC can work effectively with a particular LLM. We'll try to help answer that question and provide some guidance on configuration choices that users should consider when working with GenAI.

First, consider some basics on what it takes to handle an LLM on a PC. While AI routines can be processed on the CPU or on a new class of dedicated AI circuitry called an NPU, NVIDIA RTX GPUs currently hold the pole position for AI processing in PCs with dedicated circuits called Tensor cores. RTX Tensor cores are designed to enable the mixed-precision mathematical computing that is at the heart of AI processing. But performing the math is only part of the story; LLMs add the consideration of available memory space, given their potentially large memory footprint. To maximize AI performance on the GPU, you want the LLM processing to fit into the GPU's VRAM. NVIDIA's line of GPUs is scalable across both mobile and fixed workstation offerings, providing options for the number of Tensor cores and the amount of GPU VRAM, so a system can be easily sized to fit. Keep in mind that some fixed workstations can host multiple GPUs, expanding capacities even further.

There is an increasing number and variety of LLMs coming onto the market, but one of the most important considerations for determining hardware requirements is the parameter size of the LLM selected. Take Meta AI's Llama-2 LLM. It is available in three different parameter sizes: 7, 13 and 70 billion parameters. Generally, with higher parameter counts, one can expect greater accuracy from the LLM and greater applicability for general knowledge applications.


Whether a customer's goal is to take the foundation model and run it as-is for inferencing or to adapt it to their specific use case and data, they need to be aware of the demands the LLM will put on the machine and how best to manage the model. Developing and training a model against a specific use case using customer-specific data is where customers have seen the greatest innovation and return on their AI projects. The largest models can place extreme performance demands on the machine when developing new features and applications, so data scientists have developed approaches that reduce the processing overhead while managing the accuracy of the LLM's output.

Quantization is one of those approaches. It is a technique that reduces the size of an LLM by lowering the numerical precision of its internal parameters (i.e., weights). Reducing bit precision has two effects on the LLM: it shrinks the processing footprint and memory requirements, and it can also affect output accuracy. Quantization is analogous to JPEG image compression: applying more compression creates smaller, more efficient images, but applying too much compression can create images that are no longer legible for some use cases.

Let’s look at an example of how quantizing an LLM can reduce the required GPU memory.

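As a back-of-the-envelope sketch, the memory needed just to hold a model's weights is roughly the parameter count multiplied by the bytes per parameter. Real deployments also need headroom for activations, KV cache and framework overhead, so treat these as ballpark figures rather than exact requirements.

```python
# Rough estimate of GPU memory needed to hold model weights at a given
# precision. Actual usage is higher once activations, KV cache and runtime
# overhead are included; these are ballpark figures only.

def weight_memory_gb(params_billion, bits_per_param):
    """Approximate weight footprint in gigabytes."""
    total_bytes = params_billion * 1e9 * bits_per_param / 8
    return total_bytes / 1e9

for size in (7, 13, 70):                      # Llama-2 parameter sizes (billions)
    bf16 = weight_memory_gb(size, 16)         # BF16/FP16 precision
    int4 = weight_memory_gb(size, 4)          # 4-bit quantized
    print(f"Llama-2 {size}B: ~{bf16:.0f} GB at 16-bit, ~{int4:.1f} GB at 4-bit")
```

The 7B model drops from roughly 14 GB of weights at 16-bit precision to around 3.5 GB at 4-bit, which helps explain why a quantized model can fit comfortably within the VRAM of a single professional GPU.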

To put this into practical terms, customers who want to run the Llama-2 model quantized at 4-bit precision have a range of choices across the Dell Precision workstation portfolio.


Running at higher precision (BF16) raises the requirements, but Dell has solutions that can serve any size of LLM at whatever precision is needed.


Given the potential impact on output accuracy, another technique, fine-tuning, can improve accuracy by retraining a subset of the LLM's parameters on your specific data for a specific use case. Because fine-tuning adjusts only some of the trained weights, it can accelerate the training process while improving output accuracy. Combining fine-tuning with quantization can produce application-specific small language models that are ideal to deploy to a broader range of devices with even lower AI processing power requirements. Again, a developer who wants to fine-tune an LLM can be confident using Precision workstations as a sandbox for building GenAI solutions.
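As a minimal sketch of how quantization and fine-tuning are commonly combined, the example below loads a model with 4-bit weights and attaches lightweight LoRA adapters so only a small fraction of parameters is trained. It assumes the Hugging Face transformers, peft and bitsandbytes libraries; the model ID and hyperparameters are placeholders, not Dell-specific settings.

```python
# Hypothetical sketch: load a causal LLM with 4-bit quantized weights and
# attach LoRA adapters so only a small subset of parameters is trainable.
# Assumes the transformers, peft, bitsandbytes and accelerate packages.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-2-7b-hf"           # placeholder model identifier

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                           # store weights in 4-bit precision
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,       # compute in BF16 on Tensor cores
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],         # adapt attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()               # a small fraction of total weights
```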

Another technique to manage the output quality of LLMs is a technique called Retrieval-Augmented Generation (RAG). This approach provides up-to-date information in contrast to conventional AI training techniques, which are static and dated by the information used when they were trained. RAG creates a dynamic connection between the LLM and relevant information from authoritative, pre-determined knowledge sources. Using RAG, organizations have greater control over the generated output, and users have better understanding of how the LLM generates the response.
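Conceptually, RAG retrieves the most relevant passages from a curated knowledge source and places them in the prompt alongside the user's question. The toy sketch below shows that flow with an in-memory store and a stand-in embedding function; a real system would use an actual embedding model and a vector database, and the documents and query here are purely illustrative.

```python
# Toy illustration of the RAG pattern: retrieve relevant context, then build
# a grounded prompt for the LLM. embed() is a stand-in for a real embedding
# model, and the documents and query are purely illustrative.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: swap in a real embedding model in practice."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

documents = [
    "Policy A: laptops must use full-disk encryption.",
    "Policy B: GPU workstations are approved for model fine-tuning.",
    "Policy C: production data may not leave the data center.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embed(query)
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

question = "Can I fine-tune a model on my workstation?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# The prompt is then sent to the LLM, which grounds its answer in the
# retrieved passages rather than relying solely on its training data.
```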

These various techniques in working with LLMs are not mutually exclusive and often deliver greater performance efficiency and accuracy when combined and integrated.

In summary, there are key decisions regarding the size of the LLM and which techniques can best inform the configuration of the computing system needed to work effectively with LLMs. Dell Technologies is confident that whatever direction our customers want to take on their AI journey, we have solutions, from desktop to data center, to support them.

Source: dell.com

Wednesday, 20 December 2023

Become the Enabler of Next Generation Data Monetization


Digital transformation has made the monetization of data grow substantially, and that growth is not expected to stop soon. The game changer is that artificial intelligence (AI) is now easily accessible to enterprises and enables users to extract value from any data type. AI is the most powerful and important technological advancement of our generation. However, AI can expose organizations to new security risks and compliance issues that can be challenging to navigate. To overcome these challenges, Dell Technologies has partnered with Versa Networks to help service providers offer an efficient operational solution.

Data as an Economic Asset


We've all heard that data is the new oil. The value of data is indisputable, and this is an intriguing concept because it represents a monumental shift of focus away from the technology toward the promise of impactful outcomes for business. Enterprises across industries are looking to data to help with increased profits, reduced risk, greater efficiency, higher customer and employee satisfaction and improved sustainability.

The ability to extract value from large sets of data can mean the difference between success and failure. Creating competitive advantages can come down to the speed and accuracy with which users can analyze data and react accordingly. The challenge is that securely and reliably transferring massive volumes of data can be expensive in terms of bandwidth, compute and security.

For example, data coming from the Internet of Things (IoT) may be highly valuable when aggregated, but in its raw form it arrives in massive volumes with relatively low information content. From an analytics perspective, it may be beneficial to limit traffic by forwarding only critical data. In terms of moving the data, WAN bandwidth can be limited by throughput or cost, and transmitting anything other than alerts adds traffic volume and contributes to increased latency on an already overloaded network.
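As a simple, hypothetical illustration of that filtering idea, an edge gateway might aggregate raw sensor readings locally and forward only a compact summary plus threshold alerts across the WAN. The threshold and payload fields below are invented for the example.

```python
# Hypothetical edge-filtering sketch: aggregate raw IoT readings locally and
# transmit only a small summary plus any threshold alerts over the WAN.
from statistics import mean

ALERT_THRESHOLD_C = 85.0                       # illustrative temperature limit

def summarize(readings: list[float]) -> dict:
    """Reduce a window of raw readings to a payload worth transmitting."""
    alerts = [r for r in readings if r > ALERT_THRESHOLD_C]
    return {
        "count": len(readings),
        "avg": round(mean(readings), 2),
        "max": max(readings),
        "alerts": alerts,                      # raw values only for anomalies
    }

window = [71.2, 70.8, 72.5, 90.1, 71.0]        # e.g., one minute of sensor data
print(summarize(window))                       # a few bytes instead of the full stream
```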

These challenges result in the need to implement a system allowing for secure connectivity to transport the data from multiple sources located anywhere, while controlling the cost of the data transmission. To keep operations simple and support the massive growth of connection points, the network must be automated and self-configurable.

Why SD-WAN SASE?


Secure access service edge (SASE) is the modern network, converging security and networking into a single unified platform that delivers superior protection and performance compared to “bolted together” solutions. SASE enables organizations to securely connect branch offices, users, applications, devices and IoT systems regardless of their location. In addition to providing secure access, SASE enables fast, seamless and consistent application performance, via the cloud, on-premises or a combination of both, ensuring a positive user experience.

SASE includes many networking and network security services, but at the core requires Software Defined Wide Area Networking (SD-WAN), Zero Trust Network Access (ZTNA), Secure Web Gateways (SWG), Cloud Access Security Broker (CASB), Firewall as a Service (FWaaS) and Data Loss Prevention (DLP). SASE services are tightly integrated within a single software stack without the requirement to connect multiple disparate functions, delivering the visibility and control needed to simplify how you connect to and protect your network. 

Simplify the Edge with the Power of Three


Dell Technologies has partnered with Versa Networks to deliver a flexible SD-WAN and SASE solution that allows service providers to monetize advanced data networking for their enterprise customers. With Dell and Versa, telecommunications service providers (TSP) and regional service providers can now propose solutions with a large scope of possible implementations to service businesses of all sizes no matter their requirements. The joint Dell and Versa architecture provides a highly flexible set of capabilities, ranging from the creation of a dedicated tenant utilizing cloud hosted shared components, to the ability to deploy dedicated resources on both service provider networks and the enterprise premises.

If the use of public cloud is prohibited by local regulations or enterprise security policies, service providers now have the ability to deliver sovereign, trusted SASE services with control over the data path and data storage.

Broad device support is another strong value proposition. Nowadays, enterprise devices are not limited to worker laptops and include smartphones, tablets and IoT devices. Through integration with the mobile network and use of mobile SIM IDs, we enable the service provider to include all the mobile devices within an ecosystem as a part of the enterprise’s consolidated security and connectivity plan.

With Versa Networks, Dell Technologies has launched the Edge Network Security, SD-WAN SASE solution that allows service providers to deliver on next-generation data monetization and network services. For more information, please reach out to the Dell Solution Co-creation team.

Source: dell.com

Tuesday, 19 December 2023

Secure Data, Wherever it Resides, with Proactive Strategies


In the fast-paced world of cyber threats, data breaches are a relentless force. Cloud environments, experiencing over 39% of breaches in 2022, are not immune. Regardless of location, typical adversaries target data through credential theft and lateral movement, followed by vulnerability exploitation.

Well-protected organizations are actively confronting these challenges by instituting good cyber hygiene, or a set of processes to maintain the security of users, devices, data and networks. Key capabilities include robust Identity and Access Management (IAM) policies, segmentation of networks and tightening of vulnerability management processes. These are foundational elements in a comprehensive approach to reduce the attack surface and protect data. As organizations navigate the intricacies of securing solutions in cloud environments like Microsoft Azure or Microsoft 365, adopting strategies that enhance cyber hygiene becomes imperative.

In today’s digital world, this is not just an option but essential to safeguard against myriad threats and to ultimately ensure the security and integrity of cloud workloads. Let’s examine how you can address these to secure your operations in a cloud environment with the shared responsibility model in mind.

Multi-layered Defense for a Resilient Cloud Strategy


Securing access privileges has long been a cornerstone of cybersecurity and remains the case in the cloud. This demands tightening IAM policies and procedures, which have historically been fragmented across various tools and platforms.

Indeed, many IT environments have traditionally operated in multiple silos, resulting in numerous sets of credentials and fragmented access controls. This fragmentation has created opportunities for attackers to access an environment from its weakest point of control. To counter this threat, organizations must centralize IAM into a single, comprehensive tool for better control over access management across the organization. Centralizing IAM consolidates control, streamlines access management and reduces the number of credentials.

Furthermore, taking a centralized approach to IAM is a pivotal step to aligning with mature cybersecurity and Zero Trust. This approach can enable the use of principles such as Least Privileged Access, which focuses on providing the minimum viable access based on the IT and security needs of the user, application, or device.

Strengthening Network Segmentation


If an adversary does gain access to an environment via phishing or other means, their impact will be limited by the design of the network. A flat network structure allows bad actors to move laterally and cause extensive damage, as most IT assets reside on a single network. The best way to prevent this is commonly known as network segmentation. By having the different parts of the organization’s network walled off, the intruder’s potential lateral movement is limited.

In cloud environments, micro-segmentation takes security prevention to the next level by using software to segment the network down to individual workloads. This granular approach significantly restricts unauthorized access and movement. Additionally, incorporating virtual networks and firewalls into your cloud environments creates a multi-layered network defense strategy. Virtual networks provide structured isolation, reducing the attack surface, while firewalls focus on safeguarding web applications and what can access them. This multi-layered approach is vital for maintaining security for data in a multicloud world.
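As a conceptual toy example of how segmentation limits lateral movement, a policy can be expressed as an explicit allow-list of flows between segments, with everything else denied by default. The segments, ports and rules below are made up for illustration.

```python
# Toy segmentation policy: traffic is denied unless a rule explicitly allows
# the source segment to reach the destination segment on a given port.
# Segment names, ports and rules are illustrative only.
ALLOWED_FLOWS = {
    ("web", "app"): {443},
    ("app", "db"): {5432},
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default-deny check against the allow-list."""
    return port in ALLOWED_FLOWS.get((src, dst), set())

print(is_allowed("web", "app", 443))    # True: permitted path
print(is_allowed("web", "db", 5432))    # False: no direct path, lateral movement blocked
```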

Evolving Toward Continuous Vulnerability Management


Traditionally, organizations have patched vulnerabilities based on maintenance window availability and in response to critical incidents. However, today’s approach requires a more proactive stance, fixing vulnerabilities on a continuous basis and strategically prioritizing and fortifying defenses against the most critical threats. In doing so, organizations proactively mitigate vulnerabilities before they can be exploited.

A key aspect of proactive vulnerability management is continuous scanning for threats and known vulnerabilities using a vulnerability management tool. The goal is not just to identify vulnerabilities; the tool should also recommend how to prioritize them based on factors such as potential impact, exploitability and the significance of the affected systems, data and workloads. This continuous vigilance, coupled with an efficient remediation mechanism, enables organizations to swiftly address security vulnerabilities.
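As a hypothetical illustration of that prioritization logic, a simple score can combine severity, exploit availability and asset criticality. The fields and weights below are invented for the example and do not reflect any specific tool.

```python
# Illustrative vulnerability prioritization: rank findings by a score that
# combines severity, exploit availability and asset criticality.
# Field names and weights are hypothetical, not taken from any product.

findings = [
    {"cve": "CVE-2023-0001", "cvss": 9.8, "exploit_available": True,  "asset_criticality": 3},
    {"cve": "CVE-2023-0002", "cvss": 7.5, "exploit_available": False, "asset_criticality": 2},
    {"cve": "CVE-2023-0003", "cvss": 5.3, "exploit_available": True,  "asset_criticality": 1},
]

def priority(finding: dict) -> float:
    """Higher score means remediate sooner."""
    exploit_factor = 1.5 if finding["exploit_available"] else 1.0
    return finding["cvss"] * exploit_factor * finding["asset_criticality"]

for f in sorted(findings, key=priority, reverse=True):
    print(f"{f['cve']}: priority {priority(f):.1f}")
```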

Taking a proactive stance toward vulnerability management will enable organizations to effectively mitigate security threats before they can be exploited—offering a solid foundation in safeguarding critical assets within IT, OT, IoT and cloud environments.

Remain Vigilant and Have a Plan in Place


A robust security strategy must not only operate as a guiding plan but should also remain flexible toward evolving threats. The three elements we’ve discussed—strong Identity and Access Management practices, network segmentation and regular patching of vulnerabilities—form the bedrock of a solid security foundation in the cloud.

Dell has recently announced a range of offers with Microsoft technologies that complement these security practices and help fortify your organization. These new services include:




With Dell Technologies Services, you can rely on us for strategic and technology guidance and expertise, advisory and implementation best practices, and the skills your organization needs to enhance its security posture and improve its cyber hygiene.

Source: dell.com

Saturday, 16 December 2023

Empowering Generative AI, Enterprise and Telco Networks with SONiC


SONiC, Software for Open Networking in the Cloud, an open-source network operating system, has revolutionized the networking ecosystem. An increasing number of users, a vibrant open-source community and a growing ecosystem of vendors have nurtured its growth. Based on the rapid advancement and adoption of SONiC, industry analysts predict SONiC to be a $5 billion addressable market encompassing switching hardware, software and management and orchestration solutions. Analysts also anticipate SONiC will claim a substantial 15-20% share of the data center footprint across large enterprise and tier 2 service provider segments.

As the networking landscape continues to evolve, so does SONiC. Today, we’re thrilled to introduce Enterprise SONiC Distribution by Dell Technologies 4.2, a release that extends the capabilities of SONiC into uncharted territories, catering to a broad spectrum of use cases, from generative AI (GenAI) to broader enterprise environments to telecommunications. In this blog post, we’ll delve into the highlights of this transformative release and explore how it enhances networking across various industry use cases and scenarios.

Generative AI Advancements


Elevate your network's capabilities for generative AI with extended RoCEv2 support, delivering exceptional performance on the Trident4- and Tomahawk 4-based 400G Ethernet switches, Dell Technologies Z9432F-ON and Z9664F-ON. We've also enhanced SONiC to provide fine-tuned congestion management with Priority Flow Control (PFC) and Enhanced Transmission Selection (ETS), ensuring your network is never monopolized by a single class of traffic, while Explicit Congestion Notification (ECN) allows devices to communicate congestion intelligently.

Embrace improved performance and efficiency with the latest cut-through switching functionality, which enhances your network's speed and agility. Improved traffic path allocation through enhanced hashing algorithms keeps your AI traffic moving flawlessly and delivers optimal network performance.

Enterprise and Service Provider Use Case Enhancements


Fortify your network's reliability and redundancy with the latest EVPN multihoming support, enhancing the resiliency and efficiency of enterprise data center fabrics. Design robust, flexible data center networks for environments that require high availability, load balancing and ease of management.

Experience unprecedented versatility with extended QinQ functionality, catering to hairpin configurations and multi-switch VNF use cases. QinQ allows nested VLAN tagging for network segmentation and isolation, providing more granular control over traffic flows and security. In telco data centers, it facilitates the efficient encapsulation and transport of customer traffic, ensuring different customers' VLANs can coexist and remain separate within the service provider's network.

Security Advancements


Your network’s security just got an upgrade with Secure Boot functionality, helping ensure that only authorized and unaltered code runs during the system’s boot process, thereby minimizing the risk of malicious attacks and unauthorized access. Secure Boot safeguards critical infrastructure and prevents the injection of malware or compromised firmware, reducing the likelihood of data breaches and operational disruptions.

Enablement and Validation with Dell Hyperconverged Infrastructure and Storage Solutions


Unlock a new era of seamless integration with SONiC across Dell Technologies Hyperconverged Infrastructure, VxRail. This enables customers to easily deploy and manage their virtualized environments, reducing complexity and time-to-market for new services. We have also brought the same enablement with SONiC supporting Dell’s PowerFlex Storage solution to deliver software-defined, highly adaptable and scalable connectivity. These validated use cases empower customers to efficiently manage their data, whether it’s structured or unstructured, while providing the agility to adapt to changing business needs and enabling seamless connectivity for storage across on-prem and cloud environments.

Unified Fabric Health Monitoring


Achieve holistic fabric health monitoring with the integration of SONiC with Dell Technologies CloudIQ solution. CloudIQ offers a cloud-based, AI-driven analytics and monitoring platform for Dell Technologies’ storage and networking solutions. It allows real-time visibility and insights into the performance and health of the infrastructure, enabling proactive issue resolution and optimal resource allocation.

Enterprise SONiC Distribution by Dell Technologies 4.2 represents a pivotal step forward in the world of networking, allowing innovative use cases and paving the way for the future of networking. We’re excited to continue our journey of open-source collaboration and bring the benefits of SONiC to an even broader range of industry use cases and applications.

Source: dell.com

Thursday, 14 December 2023

InsightIQ 5.0: Driving Efficiency for Demanding AI Workloads


Keeping file storage performing at peak levels can be a daunting task. Numerous factors contribute to performance management challenges for unstructured data, including data volume and growth as well as security and application requirements. For example, more than 90% of the world's data is unstructured, meaning storage administrators need to manage a tremendous amount of content with a variety of requirements. Furthermore, the rise of AI and generative AI (GenAI) is bringing new opportunities and performance requirements for unstructured storage. To gain business insights from AI, it is crucial that file storage performs at peak levels. Now organizations can streamline unstructured data performance management and unlock application benefits with the newest release of InsightIQ, which is now available for download.

Dell PowerScale is the world’s most flexible, efficient and secure scale-out NAS solution and is the #1 NAS in the market. Dell utilized this industry leadership to innovate the next generation of file storage performance manager, InsightIQ 5.0. InsightIQ is Dell’s software specifically designed to monitor and manage performance for PowerScale storage. InsightIQ differs from CloudIQ, which provides comprehensive health status and monitoring to the broad portfolio of Dell data center infrastructure. The new InsightIQ 5.0 software expands PowerScale monitoring capabilities by increasing efficiencies that benefit your file storage performance tasks. For example, InsightIQ 5.0 includes a new user interface, automated email alerts and added security. InsightIQ 5.0 is available today for all existing and new PowerScale customers at no additional charge. These innovations are designed to simplify management, expand scale and security and automate operations for PowerScale performance monitoring for AI, GenAI and all other workloads.

Simplified Management


Dell built InsightIQ 5.0 to simplify operations with proactive performance management. The software includes a completely new user interface, a portion of which is shown below, designed to make day-to-day performance monitoring easier for IT administrators. The dashboard has a variety of tiles to display key information at a glance. For example, there is a status tile summarizing the number of clusters the software manages. There is also a tile that displays recent alerts. Another tile shows aggregated capacity for monitored clusters displaying the amount of storage remaining. There are also summaries of cluster performance and percentage of used capacity to rapidly gain an overview of cluster performance and identify any performance bottlenecks. These features are designed to simplify tasks and streamline file storage performance management.

Section of the InsightIQ 5.0 dashboard.

Expanded Scale and Security


The software for InsightIQ 5.0 is now based on a new Kubernetes platform, enabling seamless PowerScale performance management. The advancements improve the scalability of InsightIQ 5.0 to significantly increase the number of PowerScale systems the software can manage. A single InsightIQ instance can now manage up to 504 PowerScale nodes from a single user interface. InsightIQ 5.0 also offers improved, secure communications for the management of PowerScale infrastructure. InsightIQ 5.0 utilizes TLS 1.3 and LDAP-S for secure communications to the PowerScale systems, helping to keep infrastructure safe. These InsightIQ 5.0 infrastructure upgrades provide state-of-the-art performance capabilities to save IT admins time and provide increased security.

Automated Operations


We designed InsightIQ 5.0 to automate operations and free users from trivial tasks. For example, InsightIQ 5.0 offers new Key Performance Indicator-based alerts with customizable thresholds, so administrators can receive automated notifications as soon as problems develop rather than constantly checking the software. The alerting capabilities in InsightIQ 5.0 enable users to resolve performance issues rapidly. In addition, InsightIQ 5.0 offers customizable report management for in-depth analysis of file storage performance. The software has also been designed for seamless upgrades and deployments: existing InsightIQ 4.3, 4.4 and 4.4.1 deployments can easily upgrade to 5.0 and benefit from the latest advancements.

With InsightIQ 5.0, performance management of Dell PowerScale nodes is easier and faster than ever. The proliferation of AI workloads necessitates resolving any performance-related storage issues, and Dell’s new software provides powerful capabilities to keep PowerScale operating at the highest levels. InsightIQ 5.0 is available to all PowerScale customers at no additional charge and exemplifies Dell’s dedication to simplify IT management.

Source: dell.com

Tuesday, 12 December 2023

Precision Medicine, AI and the New Frontier


If the current buzz is to be believed, we stand at the precipice of a new dawn in healthcare. Let’s put aside the enthusiasm and provide a dose of reality.

Achieving the objective of delivering AI-driven healthcare can be a risky path. Embarking on this pathway with data sets driving diagnosis must be tempered with clinical and analytical governance and oversight. In this blog post, we discuss:

  • The challenges and opportunities of applying AI to precision medicine, which aims to provide personalized and effective healthcare based on data and evidence.
  • The need for data quality, clinical governance and ethical oversight to ensure AI solutions are reliable, safe and beneficial for patients and providers.
  • A proposed five-step process for healthcare providers to explore and implement AI solutions in their organizations, involving data analysis, clinical identification, ROI estimation, MVP modelling and pilot testing.

Because artificial intelligence solutions are driven by data, clean, non-biased data is required to avoid potential pitfalls in data-driven outcomes. What is "clean, non-biased data"? Examples include demographics, labs, appointment schedules, prescriptions, patient-reported measures and presenting complaints. Given the powerful algorithms AI uses today, all data points need to be reassessed and algorithms adjusted to weight the data appropriately. For precision medicine to be effective, it must view the patient as a whole, based on trusted clinical markers.

There is strong evidence the standardization of care is part of an evidence-based clinical journey. Currently, the clinical pathway begins at the point of diagnosis. However, in the near future, it will start even earlier in the process. I believe this will be precision medicine 2.0, where treatment is determined by the patient’s presentation and associated information.

Driving precision medicine across the ecosystem is no easy task. Healthcare providers need solutions that can be easily deployed and positively affect patient outcomes while at the same time understanding how clinical risk is mitigated in the ecosystem.

Managing the transformation process required to deploy AI in healthcare is complex and needs coordinated action. Before deploying AI into healthcare, organizations should establish appropriate governance teams, including clinical, ethical and operational teams.

Healthcare providers exploring AI in their operations can follow this five-step process as they embark on this path.

  1. Review the historical data set and define what intelligence you can obtain by leveraging this existing data. Most organizations do not have this expertise in-house and would need to liaise with a specialist team of clinical informaticians and data experts to understand the current environment.
  2. Work with the clinical team to identify key client groups or system workflows where care (defined in terms of the IHI quadruple aim) can be improved by leveraging data more effectively.
  3. Research the potential return on investment by combining the data analysis from step one with the clinical target identified in step two.
  4. Model the solution as a minimum viable product (MVP) and conduct a pilot with a view to fast deployment into the live clinical environment.
  5. Continue back-testing and working with the clinical teams to ensure relevance and impact.

While many institutions exemplify this approach, others try to chase “the next shiny thing” without a coherent plan. Without a dedicated team driving precision medicine through the organization, the project is set up for failure. This approach supports the careful and coordinated development of AI solutions across the ecosystem.

This is indeed a new frontier. A close partnership between healthcare providers, clinical teams and technology vendors is essential for a positive healthcare outcome for the patient.

Source: dell.com

Saturday, 9 December 2023

Generative AI Readiness: What Does Good Look Like?


Organizations of all sizes in virtually all industries want to infuse the power of generative AI (GenAI) into their operations. How does an organization prepare to take full advantage of generative AI across functions, departments and business units? What are the most important capabilities to build up or acquire?

To Achieve High GenAI Readiness, You Need a Framework


To help you be intentional about your generative AI readiness, we’ve defined a framework that covers six dimensions of readiness:

  1. Strategy and Governance
  2. Data Management
  3. AI Models
  4. Platform Technology and Operations
  5. People, Skills and Organization
  6. Adoption and Adaptation

The following are some highlights of what higher levels of readiness look like for each of these dimensions. Note that these are descriptions of future states for these dimensions, snapshots of your GenAI destination, so to speak.

Most organizations will implement many GenAI projects at the same time they are progressing along these dimensions, and the lessons learned from those early projects will help inform the readiness improvement efforts.

Drive GenAI Strategy with Business Requirements, Use Cases and Clear Governance


In an organization with a high degree of GenAI readiness, business and IT leaders collaborate to set clear objectives aligned to business priorities and actively manage a GenAI project pipeline.

Given the exceptional opportunities for innovation and optimization GenAI brings, it is more important than ever for organizations to achieve consensus on their transformation strategy. Starting with a focused set of strategy workshops, including all stakeholders who will be involved in this transformation, ensures all voices are heard, facilitates the path to agreement and gives everyone a solid vision of the future state of the organization and how to get there.

It’s vital to gain a clear view of the use cases that are most important for the business. Organizations often struggle with prioritization, as potential GenAI use cases extend into every corner of the enterprise. As part of our Professional Services for Generative AI, Dell Technologies has created a use case prioritization tool so business, IT and finance professionals can identify, analyze and prioritize use cases according to business value and technical feasibility.
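
The internals of Dell's prioritization tool aren't described here, but the underlying idea of ranking use cases on business value and technical feasibility can be pictured with a simple weighted-scoring sketch. The criteria, weights and example use cases below are assumptions chosen purely for illustration.

```python
# Hypothetical weighted-scoring sketch for GenAI use case prioritization.
# Each use case is scored 1-5 on business-value and feasibility criteria;
# the weights are illustrative assumptions, not any vendor's actual model.

WEIGHTS = {
    "revenue_impact": 0.3,
    "cost_savings": 0.2,
    "data_readiness": 0.25,
    "integration_effort": 0.25,  # higher score = easier to integrate
}

use_cases = [
    {"name": "Support chat assistant", "revenue_impact": 3, "cost_savings": 5,
     "data_readiness": 4, "integration_effort": 4},
    {"name": "Contract summarization", "revenue_impact": 2, "cost_savings": 4,
     "data_readiness": 5, "integration_effort": 5},
    {"name": "Code generation copilot", "revenue_impact": 4, "cost_savings": 3,
     "data_readiness": 3, "integration_effort": 2},
]

def score(use_case):
    """Weighted sum across all criteria."""
    return sum(WEIGHTS[k] * use_case[k] for k in WEIGHTS)

# Rank use cases from highest to lowest weighted score.
for uc in sorted(use_cases, key=score, reverse=True):
    print(f"{uc['name']}: {score(uc):.2f}")
```

Even a simple model like this forces business, IT and finance stakeholders to agree on criteria and weights before debating individual use cases.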

With new use cases comes potential risk, making it especially important for organizations to have effective oversight of all GenAI projects. This ensures compliance with regulations, risk management guidelines and evolving ethical considerations.

Get Your Data House in Order


Many organizations start their generative AI journey using pre-trained models, which require access to an organization’s data to provide the context needed for successful implementation of GenAI use cases. Whether that data is provided via model tuning or augmentation (e.g., Retrieval Augmented Generation or RAG), delivering good data to the model in a timely manner becomes key to GenAI success.
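
As a rough illustration of the augmentation path, the sketch below shows the basic RAG pattern: retrieve the enterprise documents most relevant to a query and prepend them to the prompt sent to a pre-trained model. The retrieval scoring is deliberately naive and the `generate` function is a placeholder, not a specific product API; production systems would typically use embeddings and a vector store.

```python
# Minimal Retrieval Augmented Generation (RAG) sketch.
# Retrieval here is simple keyword overlap; `generate` stands in for
# any call to a pre-trained large language model.

documents = [
    "Warranty claims must be filed within 30 days of delivery.",
    "Enterprise support tickets are triaged by severity level.",
    "Firmware updates are released on a quarterly cadence.",
]

def retrieve(query, docs, top_k=2):
    """Rank documents by the number of words they share with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def build_prompt(query, context_docs):
    """Prepend retrieved enterprise context to the user's question."""
    context = "\n".join(f"- {d}" for d in context_docs)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

def generate(prompt):
    # Placeholder for a call to a pre-trained model.
    return f"[model response to prompt of {len(prompt)} characters]"

query = "How long do I have to file a warranty claim?"
print(generate(build_prompt(query, retrieve(query, documents))))
```

The pattern keeps proprietary data outside the model weights: the model only sees the retrieved context at inference time, which is why timely, well-curated data delivery matters so much.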

As such, a high-readiness organization prioritizes scalable data management as a key enabler for GenAI, coordinating discovery, acquisition and curation of data. Business analysts and stakeholders should have access to an easy-to-use catalog of enterprise data resources.

With data management now in focus, organizations can ensure data is clean prior to use, reducing errors and bias and preventing exposure of proprietary information. A good indication of maturity is the use of data models to support both structured and unstructured data, simple integrations, automated transformations and pipelines.
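
One way to picture an automated transformation pipeline is as a sequence of small, composable steps that deduplicate records, redact sensitive fields and normalize formats before data reaches a model. The sketch below uses hypothetical field names and is only meant to illustrate the idea.

```python
# Hypothetical data-preparation pipeline: each step takes and returns a
# list of record dicts, so steps can be chained, reordered or extended.

def deduplicate(records):
    """Drop exact duplicates keyed on customer and date."""
    seen, out = set(), []
    for r in records:
        key = (r["customer_id"], r["date"])
        if key not in seen:
            seen.add(key)
            out.append(r)
    return out

def redact(records, fields=("email", "phone")):
    """Remove proprietary/sensitive fields before the data reaches a model."""
    return [{k: v for k, v in r.items() if k not in fields} for r in records]

def normalize_dates(records):
    """Example normalization: ensure ISO-style YYYY-MM-DD dates."""
    for r in records:
        r["date"] = r["date"].replace("/", "-")
    return records

PIPELINE = [deduplicate, redact, normalize_dates]

def run(records):
    for step in PIPELINE:
        records = step(records)
    return records

raw = [
    {"customer_id": 1, "date": "2023/11/02", "email": "a@example.com", "note": "renewal"},
    {"customer_id": 1, "date": "2023/11/02", "email": "a@example.com", "note": "renewal"},
]
print(run(raw))
```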

Match the Model to the Use Case and Continuously Monitor Performance


Given the costly, time-consuming and expertise-intensive nature of training a model, many organizations will choose to use techniques such as RAG, prompt-engineering or fine-tuning of a pre-trained model to quickly realize value from GenAI.

The number of choices available to customers when selecting pre-trained models is growing daily, which presents new challenges and new opportunities. Key factors in model selection should include user experience, operations, fairness and privacy, and security.

Selecting the right model is just the start. A high-readiness organization establishes processes for evaluating the performance of its chosen generative AI models, regularly tuning model parameters to optimize effectiveness. Organizations should frequently assess models for safety, fairness, accuracy and compliance.
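
Ongoing evaluation can start as simply as replaying a fixed set of test prompts with known reference answers and tracking the scores across model versions and parameter changes. The scoring function below is a crude token-overlap placeholder; real assessments would also include safety, fairness and compliance checks, and the test cases shown are invented for illustration.

```python
# Minimal model-evaluation sketch: score model answers against references
# on a fixed test set so results are comparable across model versions.

import string

test_set = [
    {"prompt": "What is the support severity for an outage?", "reference": "severity one"},
    {"prompt": "How often is firmware released?", "reference": "quarterly"},
]

def tokenize(text):
    """Lowercase, split and strip punctuation for a rough word comparison."""
    return {w.strip(string.punctuation) for w in text.lower().split()}

def token_overlap(answer, reference):
    """Crude accuracy proxy: fraction of reference words found in the answer."""
    ref, ans = tokenize(reference), tokenize(answer)
    return len(ref & ans) / len(ref) if ref else 0.0

def evaluate(model_fn):
    """model_fn is any callable mapping a prompt string to an answer string."""
    scores = [token_overlap(model_fn(case["prompt"]), case["reference"])
              for case in test_set]
    return sum(scores) / len(scores)

# Example usage with a stand-in "model".
print(evaluate(lambda prompt: "Firmware ships quarterly; outages are severity one."))
```

Keeping the test set fixed is the important design choice: it turns model swaps and parameter tuning into comparable, repeatable experiments.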

Build a Solid Technology and Operational Foundation


Once an organization selects use cases and models, it needs a trusted platform to implement and run them. A mature organization will utilize a GenAI technology stack appropriate to its use cases, security and data constraints, and ensure these technologies are standardized across the organization and its priority use cases. AI data is seamlessly integrated with multiple data sources.

Scalable data management is key to GenAI success, so highly mature organizations will have a GenAI-ready data management architecture such as Dell’s data lakehouse for analytics, with advanced analytics tools.

Level Up Skills and Organization


People with AI skills are well positioned to embrace GenAI. However, there are new skills needed beyond those required for traditional AI. A high-readiness GenAI organization provides training for specialists on platforms and tools, architecture, data engineering and the like. End users learn data analytics principles and how to construct effective prompts. This is supplemented with new support and operations teams dedicated to generative AI.
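
For end users, constructing an effective prompt usually means stating the role, the task, the constraints and the desired output format explicitly rather than asking a bare question. The template below is a generic illustration of that structure, not tied to any particular model or product.

```python
# Generic prompt template illustrating the structure end users are taught:
# role, task, constraints and output format stated explicitly.

PROMPT_TEMPLATE = """You are a {role}.
Task: {task}
Constraints: {constraints}
Respond in this format: {output_format}
Input:
{user_input}"""

prompt = PROMPT_TEMPLATE.format(
    role="customer-support analyst",
    task="summarize the ticket and suggest a next action",
    constraints="use only the information in the input; no speculation",
    output_format="two bullet points: Summary, Next action",
    user_input="Customer reports intermittent reboots after the last firmware update.",
)
print(prompt)
```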

Manage Adoption and Adaptation


An organization at a high level of GenAI readiness has a clear understanding of where and how generative AI can add value. The initial strategy sessions help create that early view, but this is not a static space. Business and IT must continue to work together to integrate GenAI into new initiatives.

Continuous improvement within GenAI should be standard practice for organizations and can be achieved in a number of ways. Teams can capture human and automated feedback from model outputs and incorporate lessons learned into model training, guardrails and information retrieval.
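
Capturing that feedback can be as lightweight as logging each model response with a human rating and the result of an automated policy check, then periodically pulling the low-rated or non-compliant examples into the next round of tuning, guardrail updates or retrieval improvements. The log structure below is an assumption sketched for illustration.

```python
# Lightweight feedback log: each entry records the prompt, the model output,
# a human rating and the result of an automated policy/compliance check.

import json
from datetime import datetime, timezone

FEEDBACK_LOG = "genai_feedback.jsonl"

def log_feedback(prompt, response, human_rating, policy_ok):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
        "human_rating": human_rating,   # e.g., 1 (poor) to 5 (excellent)
        "policy_ok": policy_ok,         # outcome of an automated check
    }
    with open(FEEDBACK_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

def examples_for_review(min_rating=3):
    """Collect entries worth feeding back into tuning or guardrail updates."""
    with open(FEEDBACK_LOG) as f:
        entries = [json.loads(line) for line in f]
    return [e for e in entries if e["human_rating"] < min_rating or not e["policy_ok"]]

# Example usage.
log_feedback("Summarize Q3 results", "[model output]", human_rating=2, policy_ok=True)
print(len(examples_for_review()))
```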

These organizations integrate automated compliance with corporate policy, data privacy and government regulations into development and deployment processes.

Embark on Short-term GenAI Opportunities and Advance GenAI Readiness


As an organization moves to higher readiness levels, the opportunities for leveraging the benefits of generative AI increase in number and business impact.

But don’t think you need to wait until the readiness dimensions reach a certain level to begin applying GenAI to key use cases. You can and should begin with shorter-term, tactical projects that can provide efficiencies and financial benefits today.

If you’re looking to apply GenAI best practices, Dell Consulting Services can help in many ways. A great place to start is a Generative AI Accelerator Workshop, a half-day interactive strategic session with business and IT leaders to assess your organization’s GenAI readiness.

Source: dell.com

Tuesday, 5 December 2023

Simplifying Artificial Intelligence Solutions Together with EY

The level of market excitement around generative AI (GenAI) is hard to exaggerate, and most agree this fast-developing technology represents a fundamental change in how companies will do business going forward. Areas such as customer operations, content creation, software development and sales will see substantive changes in the next few years. However, this leap in technology does not come without challenges, from deploying and maintaining the required infrastructure to addressing ethical, regulatory and security issues.

To help guide companies on their journey toward transformation, EY and Dell Technologies are collaborating to develop joint solutions that leverage EY.ai. The EY.ai platform brings together human capabilities and AI to help organizations transform their businesses through confident and responsible adoption of AI. With a portfolio of GenAI solutions and tools for bespoke EY services, customers can confidently pursue their transformational AI opportunities.

Take the financial services industry as a great example. A GenAI deployment could quickly help firms with intricate data analytics: taking complex data sets, running queries and identifying previously hidden trends through much more comprehensive analyses.

Using Proven Dell Technology

Underpinning EY’s new AI-driven solutions portfolio is industry-leading Dell infrastructure, such as the recently announced Dell Validated Design for Generative AI. With a proven and tested approach to adopting and deploying full-stack GenAI solutions, customers can now prototype and deploy workloads on purpose-built hardware and software with embedded security, optimized for generative AI use case requirements. In addition, EY and Dell’s joint Digital Data Fabric methodology and Dell’s multicloud Alpine platform mean customers can be ready to optimize their data across public cloud, on-prem and edge AI applications.

Deliver on Your Priorities Faster and at Scale

One of the largest hurdles for organizations’ AI visions is aligning business demand for innovation and value creation with IT deployment capabilities. GenAI deployments will frequently be a mix of edge, dedicated cloud (aka on-prem) and public cloud layers. With growing supply chain lead times, competition for shared GenAI-specific infrastructure, expanding security concerns and other headwinds, organizations need to unlock the benefits of a multi-tiered infrastructure model to drive sustainable value creation. Dell Technologies and EY’s joint solutions on EY.ai, using Dell validated designs, simplify the AI pathways to business impact. These solutions accelerate an organization’s ability to focus scarce resources on GenAI business results and not on the constraints of IT deployment patterns.

The Smartest Way to Get Started

Accelerate speed-to-value by starting with a focused yet high-impact proof of concept to demonstrate benefits quickly, while in parallel setting a strategy and establishing a trusted GenAI foundation to rapidly meet the expanding needs of the business.

Source: dell.com