Saturday 27 July 2024

Introducing Llama 3.1: Pushing the Boundaries of Open AI


The AI community is abuzz with anticipation as Meta announced the launch of Llama 3.1, their most advanced and capable AI model collection to date. Building on the success of its Llama 3 predecessors, Llama 3.1 includes Llama 3.1 405B, a model that represents a leap forward in natural language processing, machine learning and open-source AI capabilities, and rivals the best closed-source models in the field. We at Dell Technologies are committed to making these cutting-edge models available on-premises by demonstrating the practical deployment of Llama models on enterprise-grade infrastructure. Here’s a closer look at what makes these models a game-changer and how they are set to transform the landscape of AI.

Unmatched Versatility and Performance


Llama 3.1 models expand context length to 128K, add support across eight languages and include Llama 3.1 405B that boasts an unprecedented 405 billion parameters, making it the largest openly available foundation model. Llama 3.1 405B rivals the top models in AI when it comes to state-of-the-art capabilities in general knowledge, steerability, math, tool use and multilingual translation. This massive scale enables it to understand and generate human-like text with remarkable accuracy, nuance and contextual understanding. Whether writing essays, answering complex questions or engaging in natural conversations, Llama 3.1 405B excels in delivering responses that are not only accurate but also contextually relevant. This release is poised to have a profound impact on various aspects of AI research and development, particularly in the areas of:

  • Synthetic data generation. The creation of robust and diverse synthetic data sets, enabling the training and evaluation of AI models in a more efficient and controlled manner.
  • Model distillation. The compression and simplification of complex models, facilitating the transfer of knowledge and expertise to smaller, more efficient models.
  • Model customization. The instruction-tuned model can be used for further customization using Parameter-Efficient Fine-Tuning tools (P-tuning, Adapters, LoRA, etc.).
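
Parameter-efficient techniques like LoRA train a small low-rank update alongside frozen pretrained weights. The following is a minimal NumPy sketch of the idea only, not the actual fine-tuning workflow; dimensions and hyperparameters are arbitrary:

```python
import numpy as np

# Minimal LoRA (Low-Rank Adaptation) sketch: a frozen weight matrix W is
# augmented with a trainable low-rank update B @ A, so only
# r * (d_in + d_out) parameters are trained instead of d_in * d_out.

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 8, 16      # illustrative dimensions and LoRA hyperparameters

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, initialized to zero

def lora_forward(x):
    # Base path plus scaled low-rank path; because B starts at zero,
    # the adapted model initially matches the pretrained one exactly.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
assert np.allclose(lora_forward(x), W @ x)  # identical to base model before training
print("trainable params:", A.size + B.size, "vs full:", W.size)
```

Only A and B would receive gradient updates during fine-tuning, which is what makes the approach practical for very large models.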

Responsible Development and Safety Measures


The development of Llama 3.1 405B has been guided by Meta’s strong commitment to ethical AI practices and safety. Meta is continuing to build Llama out into a system by providing more components that work together with the model, including reference implementations. The aim is to empower developers to create their own custom agents and new types of agentic behaviors, bolstering the models with new security and safety tools to help build AI responsibly.

Dell PowerEdge and Llama models: Powering the Future of AI


Strengthening our continued collaboration with Meta, Dell engineers have successfully deployed the Llama 3.1 405B model on Dell’s leading compute platforms, both on a single PowerEdge XE9680 server and on a distributed system comprising two PowerEdge XE9680 servers, connected via InfiniBand (IB) or RDMA over Converged Ethernet (RoCE) for high-speed data transfer and efficient scaling.

The Llama 3.1 Community License Agreement gives organizations the freedom and flexibility to adapt the model to almost any application. The Llama 3.1 405B models will soon be available on Dell Enterprise Hub on the Hugging Face platform at dell.huggingface.co for download as ready-to-use containers, optimized on Dell PowerEdge XE9680. The Dell Enterprise Hub is an innovative portal designed specifically for Dell customers, offering a streamlined approach to on-premises deployment of popular large language models (LLM) on Dell’s robust infrastructure.

Use Cases and Deployment Flexibility


The release of Llama 3.1 405B includes two categories:

  • The pre-trained general LLM
  • The instruction fine-tuned LLM

Each version is released with model weights in BF16. In addition, each category includes a variant supporting model parallelism of 16 (for running across two nodes) and a variant with FP8 weights supporting model parallelism of 8.
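
To see why these parallelism and precision combinations pair up, a rough weights-only memory estimate helps. This is back-of-the-envelope arithmetic, not official sizing figures; KV cache and activation memory are excluded:

```python
# Approximate per-GPU weight footprint for the 405B model under the two
# shipped configurations: BF16 sharded across 16 GPUs (two 8-GPU nodes)
# versus FP8 sharded across 8 GPUs (one node).

PARAMS = 405e9          # 405 billion parameters

def weight_gb(bytes_per_param, tensor_parallel):
    """Weights-only memory per GPU, in GB, for a given precision and
    tensor-parallel degree."""
    return PARAMS * bytes_per_param / tensor_parallel / 1e9

bf16_tp16 = weight_gb(2, 16)   # BF16 = 2 bytes/param, 16-way parallel
fp8_tp8   = weight_gb(1, 8)    # FP8  = 1 byte/param,   8-way parallel

print(f"BF16, TP=16: ~{bf16_tp16:.1f} GB per GPU")
print(f"FP8,  TP=8 : ~{fp8_tp8:.1f} GB per GPU")
```

Both configurations land near 50 GB of weights per GPU, which is why halving the precision lets the same model run on half the accelerators.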

Enterprises can fine-tune the instruction model on proprietary or domain-specific data to create chatbot applications, coding assistants and assistants for customer service agents. Further applying model compression and distillation techniques opens additional environments for serving applications by decreasing the memory needed while benefiting from the accuracy of the 405B model. This enables customers to deploy the distilled model on smaller edge devices and in applications to meet different latency requirements. Moreover, combining retrieval augmented generation (RAG) techniques with the distilled models can enhance response quality. Notably, the large context length support enables more extensive context handling, further improving the effectiveness of RAG techniques and leading to even more accurate and informative responses.
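
The retrieval step of RAG can be sketched in a few lines. This toy example uses word overlap in place of embedding-based retrieval and omits the generation step; the passages and scoring are purely illustrative:

```python
# Minimal retrieval-augmented generation (RAG) sketch: pick the most
# relevant passage for a query, then prepend it as context to the prompt
# that would be sent to a language model.

passages = [
    "PowerEdge XE9680 supports eight GPUs per server.",
    "Llama 3.1 expands context length to 128K tokens.",
    "Dell Enterprise Hub offers ready-to-use model containers.",
]

def retrieve(query, docs):
    # Score each passage by the number of query words it shares;
    # real systems use embedding similarity instead.
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query, docs):
    context = retrieve(query, docs)
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

prompt = build_prompt("What context length does Llama 3.1 support?", passages)
print(prompt)
```

A longer model context window, as noted above, simply allows more retrieved passages to be packed into the prompt before generation.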

In the upcoming weeks, Dell Technologies will share test results, practical applications and deployment guidelines demonstrating the seamless deployment of the Llama 3.1 405B models on Dell infrastructure. This ongoing collaboration between Dell and Meta highlights our dedication to driving innovation in the open-source AI community and enabling businesses to leverage AI capabilities within their own IT infrastructure. By leveraging Llama 3.1 models and the open-source AI frameworks, researchers and practitioners can explore new avenues for advancing state-of-the-art AI research and applications.

Source: dell.com

Saturday 20 July 2024

Generative AI Release Updates With Dell and AMD


As AI continues to rapidly evolve, the fusion of open-source technologies and cutting-edge hardware acceleration is driving industry innovation. Dell Technologies and AMD are at the forefront, offering on-premises infrastructure solutions that are up to 75% more cost-effective than public cloud IaaS and tailored to empower generative AI applications in the enterprise.


We’re excited to share a series of updates powering the deployment of open AI ecosystems and frameworks. First, the Dell PowerEdge XE9680 is now shipping with the AMD Instinct™ MI300X accelerators, enhancing AI performance options for our customers. Second, we’ve published the Dell Validated Design for Generative AI with AMD, enabling custom applications development with open-framework foundations. And third, our new deployment services, available in September, will ensure smooth integration of these innovations into your operations.

Now Shipping – PowerEdge XE9680 With AMD Instinct MI300X Accelerators


The PowerEdge XE9680 with AMD Instinct MI300X accelerators offers high-performance capabilities designed for enterprises leveraging generative AI, featuring eight MI300X accelerators, a combined 1.5 TB of HBM3 memory and 42 petaFLOPS of peak theoretical FP8 performance (with sparsity).

This powerful configuration enables faster training and inferencing of large language models, helping organizations deliver AI-driven insights and innovative applications more efficiently. Recent testing on a single-node configuration demonstrated leading total cost of ownership value:

  • Deployed the Llama 2 70B parameter model on a single AMD Instinct MI300X accelerator on the Dell PowerEdge XE9680 Server.
  • Deployed eight concurrent instances of the model by utilizing all eight available AMD Instinct MI300X accelerators on the Dell PowerEdge XE9680 Server.
  • Fine-tuned the Llama 2 70B parameter model with FP16 precision on one Dell PowerEdge XE9680 Server with eight AMD Instinct MI300X accelerators.
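
Some quick arithmetic (weights-only and approximate, not figures from the testing above) illustrates why one 70B-parameter model instance fits on each of the eight accelerators:

```python
# A Llama 2 70B model in FP16 needs roughly 2 bytes per parameter for its
# weights. That fits within a single MI300X's 192 GB of HBM3, so each of
# the eight accelerators in an XE9680 can serve an independent instance.

PARAMS_B = 70          # billions of parameters
BYTES_FP16 = 2
HBM_PER_GPU_GB = 192   # MI300X HBM3 capacity
GPUS = 8

weights_gb = PARAMS_B * BYTES_FP16       # ~140 GB of weights
fits_per_gpu = weights_gb < HBM_PER_GPU_GB

print(f"FP16 weights: ~{weights_gb} GB; fits on one MI300X: {fits_per_gpu}")
print(f"Node HBM total: {GPUS * HBM_PER_GPU_GB / 1000:.1f} TB")
```

The headroom left on each accelerator (roughly 50 GB) is what serving frameworks use for KV cache and activations.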

With simplified deployment through Dell OpenManage Enterprise, intelligent automation via APEX AIOps software and enhanced security featuring integrated cyber recovery and a Zero Trust approach, the XE9680 empowers businesses to rapidly implement and scale their GenAI solutions while maintaining a robust security posture.

Available Today – the Dell Validated Design for Generative AI with AMD


Announced in May, and available today, the Dell Validated Design for Generative AI with AMD is the next step in Dell Generative AI Solutions, making it easier for organizations to deploy trustworthy GenAI. This design guidance gives organizations and developers comprehensive directions for implementing LLM inferencing and model customization, as well as advanced techniques like fine-tuning and retrieval augmented generation (RAG). Built on open standards, the design reduces the need for proprietary AI software suites, so developers can simplify development and freely customize workflows with open-source LLMs from partners including Hugging Face and Meta.

Accelerate modern workloads. Innovation and scale that enable efficient, agile businesses and outcomes.

  • Powered by AMD Instinct MI300X accelerators, the Dell Validated Design enables near-linear scaling and low latency distributed GenAI training and inferencing.
  • The PowerScale F710 delivers faster time to AI insights with massive gains in streaming performance that accelerates all phases of the AI pipeline.
  • The Dell PowerSwitch Z9664F-ON, offering 64 ports of 400GbE, delivers low latency and high throughput Ethernet fabrics for modern AI clusters.
  • The Broadcom Thor2 AI optimized NIC delivers 400G, interconnecting MI300X accelerators with the industry’s lowest power requirements.

Boost application development. Open-source software and ecosystems allow developers and data scientists to innovate freely.

  • AMD ROCm-powered frameworks extend the Dell Generative AI Solutions ecosystem and include support for open-source AI frameworks like PyTorch, TensorFlow, ONNX-RT and JAX, as well as the full stack of drivers, dev toolkits and APIs for AMD Instinct accelerators.
  • Dell Omnia streamlines the creation and management of AI clusters automating configuration for efficient workload processing.
  • Enterprise SONiC distribution by Dell Technologies delivers a scalable networking solution that combines the open-source SONiC platform with Dell PowerSwitch, offering advanced features and enterprise-grade support.

Dell Validated Designs for Generative AI make it simple for Dell customers to build GenAI platforms tailored to their needs by taking the guesswork out of integration, performance and sizing considerations.

Services to Get Started


Manage the AI lifecycle with confidence, with a future-proof, open development framework built for AI. Aligned to this new Dell Validated Design, we’ll be introducing new platform implementation services in September. Trusted experts will help you quickly establish a fully operational platform that is primed for innovation, implementing the necessary tools and framework into your environment and sharing best practices to maintain secure, streamlined operations.

Not sure where to begin? Try an Accelerator Workshop, a half-day facilitated event that is a great first step in determining how your organization can maximize value from AI. These are collaborative and engaging sessions involving key stakeholders, designed to focus on key challenges to help you achieve clarity for your vision. With over 1,000 workshops delivered per year and decades of experience, we’ll help you accelerate AI success.

Source: dell.com

Thursday 11 July 2024

An End-to-End Approach to Sustainability in the AI Era


With our scalable Dell AI Factory and open ecosystem partnerships, we’re helping our customers unlock new possibilities with AI. With these possibilities also come challenges, like the world’s transition to net zero and the impact generative AI will have on climate transition work already underway.

What we know is the world relies on supercomputing to solve many of its greatest challenges like advancing healthcare, transforming the food system and conserving nature. Yet certain AI workloads can be energy intensive. In fact, it is estimated that by 2030, energy demands for AI data centers will equate to around 390 gigawatts (GW), with an additional capacity of 130 GW for other applications. That’s 520 GW in total—eight times today’s capacity.


So how do we approach this as a global technology leader? How can we help create a future where progress does not come at the expense of our planet?

For decades, we have seen sustainability as a business imperative, so we treat it like one. It’s woven throughout our business operations, influences how we design products and innovate more energy-efficient data center solutions and shapes how we help our customers meet their business and societal impact goals. We take an end-to-end approach to sustainability.

Back End and Front End Expertise


Think of the back end as how we integrate and operationalize sustainability as an organization internally and within our ecosystem. Everyone across the organization plays a role, from our accounting teams to our engineers, global operations and sales leaders. It’s this foundation that enables us to drive sustainable materials innovation with our suppliers and product design teams. It’s how we are exploring the intersection of technology and energy innovation. And how we drive cross-organizational collaboration to meet upcoming regulations and reporting requirements.

The front end is what most people see. It’s the conversations we have with our customers and partners about their sustainability needs—how we’re helping them to right-size their workloads and advising on renewable energy sources. The front end is our sustainable data center solutions like storage and servers built with leading liquid and air cooling, emissions tracking and energy efficiency top of mind. It shows up in the low-emissions aluminum and recycled cobalt in our AI PCs and our multipack shipping options to reduce emissions and waste. It’s present in our recycling and recovery services for the responsible retirement or reuse of systems and the reduction of e-waste. We consider sustainability in every offering—including our as-a-Service solutions for more flexible IT management. These solutions could help reduce the overall emissions of IT operations.


Why is an End-to-end Approach to Sustainability Important?


Rising energy costs, changing regulations and reporting requirements and diverse stakeholder demands are forcing organizations to adopt more sustainable IT practices. However, almost six in 10 customers tell us they think AI will compromise their environmental sustainability efforts, and many customers tell us they are not sure where to focus. We are on a similar journey and this approach, including an annual end-to-end sustainability roadmap, has helped us focus and make progress.

There is more work to do, including how to sustainably manage power hungry GenAI. What we know is managing our sustainability innovation and operations end-to-end is an important foundation to build on as we seize the AI opportunity ahead.

It is a pivotal moment. We’re optimistic, pragmatic and will continue to use the industry’s broadest technology and services portfolio, combined with our team member and partner ingenuity, and global reach to deliver positive business, environmental and societal impact.

Source: dell.com

Wednesday 10 July 2024

Elevate the Self-Service Experience for Your Customers


As the world embraces the transformative power of generative AI (GenAI), businesses are racing to unlock new efficiencies that could propel them ahead of the competition. One high-value use case that’s capturing the imagination of our customers is based on the age-old idea of automating routine tasks and repetitive exchanges—but with a modern, AI-fueled twist.

With this in mind, we are now offering the Dell Generative AI Solution for Digital Assistants. Announced at Dell Technologies World this week, this solution is an excellent example of the Dell AI Factory with NVIDIA in action, leveraging Dell expertise across infrastructure, software and professional services.

Aimed at elevating self-service experiences with GenAI technologies and a humanistic AI Avatar interface, this solution allows users to interact with an intelligent digital assistant through natural conversation in practically any language, at any time of day. Organizations can utilize digital assistants to automate a broad range of interactions, from simple queries to multi-step exchanges, saving time and money while freeing up valuable human resources to focus on more strategic, high-impact work. 

What Are Digital Assistants?


Broadly speaking, digital assistants are a category of advanced technology that simulates conversations with users, using artificial intelligence, natural language processing and machine learning to provide a personalized, conversational experience. Digital assistants have many possible interfaces, including chat, voice-only, AI Avatars or a combination of these for a multimodal experience.

In an era where customer expectations are soaring and the end-user experience is paramount, many self-service interactions can be significantly elevated with a highly advanced, yet remarkably human-like interface that understands context and provides personalized solutions with empathy, emotional intelligence and non-verbal cues that are absent from other self-service interfaces.

The City of Amarillo is already embracing this technology to make community services more accessible for the city’s diverse citizens, who speak 62 languages and dialects.

Other example use cases include:

  • Customer service and technical support. Digital assistants can assist in answering product or service questions and provide basic troubleshooting support.
  • Employee training. Organizations can personalize training by using digital assistants to create interactive learning experiences, such as selling scenarios tailored to different audiences.

How Can GenAI Digital Assistants Help My Business?


Use cases for GenAI Digital Assistants go far beyond handling simple inquiries. They can proactively guide customers through complex processes, such as troubleshooting, completing forms and making purchases. With their contextual understanding and multimodal capabilities, these assistants streamline exchanges, reduce customer frustration compared to more traditional self-service options and increase overall satisfaction.

Furthermore, GenAI Digital Assistants can continuously learn and improve from each interaction, enabling them to better anticipate customer needs and offer personalized recommendations.

This level of tailored service fosters loyalty and drives long-term business growth, extending across a wide range of industry use cases such as:

  • Retail. Digital assistants can be used in online shopping to provide personalized recommendations and assistance. Physical stores can also utilize them for use cases such as end-of-aisle displays in big-box stores, providing a consistent brand representative for customer interactions.
  • Healthcare. Digital assistants can help with patient check-in and discharge, provide support in patient rooms and act as an interface between the patient and care team.
  • Other industries. Digital assistants have applications in various industries, including financial services, travel, insurance, hospitality and telecommunications, among others.

With the power of generative AI, businesses can redefine the boundaries of self-service, providing seamless, intuitive and highly personalized interactions at scale.

Rapidly Deploy Digital Assistants with a Proven Solution


Dell is making it simple for our customers to get their own GenAI Digital Assistant up and running quickly with Dell Validated Design for Digital Assistants, a proven solution that includes infrastructure, platforms and professional services to ensure customer success.

We have validated each element needed for an optimized deployment, leveraging our recognized AI expertise to drive reliability, security and performance, along with trusted Dell hardware such as PowerEdge servers with NVIDIA GPUs and Precision Workstations at the Edge.

This solution allows organizations to swiftly establish and scale on-premises infrastructure tailored to their needs with pre-validated solutions that streamline adoption. This reduces the time spent on design, planning and testing while minimizing risk and accelerating time-to-market.

With the flexibility to start small and quickly scale using a modular approach, organizations can confidently implement a solution that meets business requirements and supports data sovereignty and compliance with high levels of control and regulatory assurance.


Bringing the complete solution together, Dell Professional Services for Digital Assistants provide strategic guidance for customer use cases, personalized AI Avatar development, platform implementation with data source integrations and ongoing support to drive continued success, upskilling and optimization.

Source: dell.com

Tuesday 9 July 2024

Live Optics and Azure Stack HCI

The IT industry has coped with many challenges over the last few decades. One of the most impactful, probably due to its financial implications, has been “IT budget reduction”—the need for IT departments to increase efficiency, reduce the cost of their processes and operations, and optimize asset utilization.

This do-more-with-less mantra has a wide range of implications, from cost of acquisition to operational expenses and infrastructure payback period.

Cost of acquisition is not only related to our ability to get the best IT infrastructure price from technology vendors but also to the less obvious fact of optimal workload characterization that leads to the minimum infrastructure assets to service business demand.

Without the proper tools, assessing the specific needs of each infrastructure acquisition is not simple. Obtaining precise workload requirements often involves input from several functional groups, requiring their time and dedication. This is often not possible, so the only choice is to make a high-level estimation of requirements and select a hardware offering that covers the performance requirements by an ample margin.

Those ample margins do not align very well with concepts such as optimal asset utilization and, thus, do not lead to the best choice under a budget reduction paradigm.

But there is free online software, Live Optics, that can be used to collect, visualize, and share data about your IT infrastructure and the workloads it hosts. Live Optics helps you understand your workloads’ performance by providing in-depth data analysis. It makes the project requirements much clearer, so the sizing decision—based on real data—is more accurate and less reliant on estimation.

Azure Stack HCI, as a hyperconverged system, greatly benefits from such sizing considerations. It is often used to host a mix of workloads with different performance profiles. Being able to characterize the CPU, memory, storage, network, or protection requirements is key when infrastructure needs to be defined. This way, the final node configuration for the Azure Stack HCI platform will be able to cope with the workload requirements without oversizing the hardware and software offerings, and we are able to select the type of Azure Stack HCI node that best fits the workload requirements.

Microsoft recommends using an official sizing tool such as Dell’s. This tool incorporates all Azure Stack HCI design constraints and best practices, so its outcome is optimized to the workload requirements.

Imagine that we had to host in the Azure Stack HCI infrastructure a number of business applications with sets of users. With the performance data gathered by Live Optics, and using the Azure Stack HCI sizing tool, we can select the type of node we need, the amount of memory each node will have, what CPU we will equip, how many drives are needed to cover the I/O demand, and the network architecture.
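
The kind of per-dimension sizing the tool performs can be sketched as follows. The node specification and workload numbers here are hypothetical illustrations, not output from Dell's tool or Live Optics:

```python
import math

# Sizing sketch in the spirit of the Live Optics workflow: given aggregate
# peak requirements measured from the real workload, compute how many HCI
# nodes of a candidate configuration are needed.

workload = {"vcpus": 220, "ram_gb": 1800, "iops": 90_000}   # measured peaks (illustrative)
node = {"vcpus": 96, "ram_gb": 1024, "iops": 60_000}        # candidate node config (illustrative)

def nodes_needed(workload, node, spare=1):
    # Size each dimension (CPU, memory, storage I/O) independently, take
    # the most demanding one, then add spare capacity so the cluster
    # survives a node failure (N+1 resilience).
    per_dim = (math.ceil(workload[k] / node[k]) for k in workload)
    return max(per_dim) + spare

print("nodes required:", nodes_needed(workload, node))
```

Here CPU is the binding dimension (three nodes' worth of vCPUs), and the N+1 spare brings the cluster to four nodes.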

We can see a sample of the sizing tool input in the following figure:

Figure 1.  Example from Dell's sizing tool for Azure Stack HCI

In this case, we have chosen to base our Azure Stack HCI infrastructure on four AX-750 nodes, with Intel Gold 6342 CPUs and 1 TB of RAM per node.

Because we have used Live Optics to gather and analyze performance data, we have sized our hardware assets based on real customer usage data such as that shown in the next figure:

Figure 2.  Live Optics performance dashboard

This Live Optics dashboard shows the general requirements of the analyzed environment. Data of aggregated network throughput, IOPS, and memory usage or CPU utilization are displayed and, thus, can be used to size the required hardware.

There are more specific dashboards that show more details on each performance core statistic. For precise storage sizing, we can display read and write I/O behavior in great detail, as we can see in the following figure:

Figure 3.  IOPS graphics through a Live Optics read/write I/O dashboard

With a tool such as Live Optics, we can size our Azure Stack HCI infrastructure based on real workload requirements, not assumptions made because information is lacking. This leads to an accurate configuration, usually resulting in a lower price, and guarantees that the proposed infrastructure can handle even the peak business workload requirements.

Check the resources shown below to find links to the Live Optics site and collector application, as well as some Dell technical information and sizing tools for Azure Stack HCI.

Source: infohub.delltechnologies.com

Saturday 6 July 2024

Three Key Data Protection Takeaways from Dell Technologies World


AI and GenAI were all the buzz at Dell Technologies World 2024. But between the AI glitz of the mainstage and showroom floor, there were some intriguing discussions about data protection, cybersecurity and Zero Trust that underscore the criticality of building a strong data protection foundation on your journey to an AI automated future.

#1 – Don’t Let Data Protection Take a Back Seat on Your AI Journey


Jeff Clarke highlighted in his keynote address that with the AI revolution in full swing, data protection can’t be an afterthought. Clarke is right on the money, as only 11% of organizations are protecting more than 75% of their GenAI data, according to ESG Research. Considering the substantial investment organizations are making in server computation, high-speed storage, low-latency networking infrastructure and developer time to generate GenAI output, letting this data go unprotected is a major gamble—even by Las Vegas standards.


And then there’s the security side to consider. GenAI training data is a prime target for bad actors to contaminate. Manipulating or “poisoning” the data that feeds GenAI training workloads can distort outputs, producing inaccurate or biased results that could discredit an organization and damage its brand. This makes it essential for organizations to implement Zero Trust security measures around their data repositories and critical systems to keep cyber attackers at bay. In the event of a breach, the ability to isolate and contain the attack, minimize its impact and restore data from a trusted source like a digital vault becomes crucial.

#2 – In the GenAI Data Tsunami, Efficiency is King


One of the main points emphasized at Dell Technologies World is that AI and GenAI will significantly accelerate data growth. With GenAI set to double the amount of data it trains on, organizations must have data protection solutions that can efficiently scale across on-premises environments, at the edge and across multicloud infrastructure.

In this new GenAI-driven world, managing an ever-growing data footprint while keeping storage costs down will become the new gold standard. Achieving this will require solutions that offer consistent data protection operations wherever GenAI workloads reside, incorporating automation and high levels of efficiency to handle the vast amounts of data generated by AI factories.

#3 – You’re Only as Strong as Your Weakest Link


During his standing-room-only breakout session, University of Barcelona CIO Goncal Badenes delivered remarks that illuminated the potential cybersecurity vulnerabilities lurking within the depths of any organization’s IT infrastructure. Few would expect that a minimally privileged student account would serve as the entry point for cyber criminals to launch a devastating ransomware attack, crippling every critical system at the University of Barcelona. “Utter panic” was how Badenes described the reaction of IT stakeholders once they fully grasped the scale and scope of the attack. With all core services offline—including the university website, research, faculty, student, billing and payment systems—it was a seemingly worst-case scenario.

“Legally, we cannot negotiate with cyber criminals over ransomware payments,” Badenes explained to the audience. The only option available to IT personnel was to restore data from an uninfected backup copy. Fortunately for Badenes and his team, they found a clean backup copy, and with the assistance of Dell engineering support, they were able to bring all their systems back online without any data loss. Despite Badenes’ assertion that “we got lucky,” the university endured two weeks of systems downtime, during which classrooms reverted to blackboard instruction.

Lessons Learned


It turned out that, in addition to the student not practicing good password hygiene (using a strong password, changing passwords periodically, etc.), the VPN hosting this user account did not have multi-factor authentication (MFA) deployed. Additionally, the attackers exploited a critical security vulnerability on a university server to deploy the ransomware.

In response, Badenes and his staff:

  • Educated users on strong password practices and enforced mandatory password changes.
  • Mandated MFA across the network.
  • Implemented role-based access control and network segmentation.
  • Enhanced detection and response capabilities to identify anomalies and counter cyber threats.

In addition, this experience convinced Badenes and the university board of the need for an isolated cyber vault, completely off the attack surface, to ensure a clean backup copy is always available for recovery.  Although the university managed to recover from a second backup copy, Badenes noted that they were “lucky” this copy had not been infected like the primary backup. He emphasized that a cyber vault would help guarantee a clean backup for future cyberattacks. He also highlighted their investment in analytics within the vault to quickly identify the most recent clean backup, thus accelerating recovery time. With these capabilities in place, Badenes expressed greater confidence in their resiliency and ability to withstand future attacks.

Protecting and Securing Data in the Age of AI


In the age of AI and GenAI, building a strong foundation to protect and secure data is more critical than ever. To address this need, we announced the Dell Solution for AI Data Protection. This comprehensive solution, encompassing backup, recovery and cyber resilience, helps organizations safeguard crucial components such as training data, models and output data. Additionally, we are developing a Dell Reference Design for AI Data Protection using the previously announced Dell Scalable Architecture for Retrieval-Augmented Generation (RAG) with NVIDIA Microservices, which is set to be published this quarter.

This solution will deliver consistent data protection operations across on-premises, edge and multicloud environments, providing the efficiency and automation organizations need to manage the vast amounts of data generated by AI workloads and ensuring valuable information remains secure and recoverable.

Source: dell.com

Friday 5 July 2024

Resilient and Secure Data Protection for VMware Telco Cloud

Resilient and Secure Data Protection for VMware Telco Cloud

In the dynamic landscape of telecommunications, where 5G networks are rapidly evolving, ensuring data protection is paramount. As communications service providers (CSPs) embrace cloud-native platforms like VMware Telco Cloud Platform, they need robust solutions to safeguard their critical workloads and components. Enter Dell Technologies PowerProtect Data Manager, a comprehensive data protection solution designed specifically for VMware Telco Cloud Platform.

The Challenge: Protecting Modern Telecom Infrastructure


VMware Telco Cloud Platform empowers CSPs to deploy and manage virtual network functions (VNFs) and containerized network functions (CNFs) across distributed 5G networks. With its holistic visibility, orchestration capabilities and operational consistency, the VMware Telco Cloud Platform enables CSPs to modernize their infrastructure efficiently. However, any modernized environment requires a robust data protection strategy to prevent downtime and data loss, and to ensure rapid recovery in case of disasters.

Introducing Dell Technologies PowerProtect Data Manager


Dell Technologies recognizes the critical role of data protection in today’s telecom landscape. The PowerProtect Data Manager is purpose-built to address the unique challenges faced by CSPs deploying VMware Telco Cloud Platform. Let’s explore its key capabilities:

  • Software-defined data protection. PowerProtect Data Manager offers flexible data protection and compliance across applications and cloud-native IT environments. Its software-defined approach ensures adaptability to changing workloads and applications.
  • Unique protection for telco cloud platform components. CSPs rely on VMware’s Telco Cloud Platform components for mission-critical operations. PowerProtect Data Manager ensures the availability of these components without business disruption. Whether it’s management/workload clusters or application instances, your critical workloads remain protected.
  • Autonomous operation. Automated discovery and protection are at the core of PowerProtect Data Manager. It seamlessly safeguards databases, virtual machines, file systems and Kubernetes containers. The result is consistent, reliable data protection without manual intervention.
  • Efficient data protection. PowerProtect Data Manager integrates seamlessly with Dell PowerProtect DD series appliances. You can protect data directly to these appliances, with optional replication to secure Cyber Recovery vaults. Efficient, reliable and scalable data protection is now within reach.
  • Self-service backup and restore. Empower data owners with self-service capabilities. From our native interfaces, CSPs can initiate backups and restores effortlessly. No more dependency on specialized IT teams for routine data protection tasks.
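The discover-then-protect model described in these bullets can be sketched in outline. This is an illustrative model only, assuming a simple policy structure — the asset kinds, policy fields and function names are hypothetical and do not represent PowerProtect Data Manager's actual API:

```python
# Illustrative sketch of policy-driven, automated data protection.
# All names and structures here are hypothetical; this is NOT the
# PowerProtect Data Manager API.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    kind: str  # e.g., "vm", "database", "k8s-namespace"

@dataclass
class Policy:
    kinds: set     # asset kinds this policy covers
    schedule: str  # e.g., "daily"
    target: str    # e.g., a backup appliance identifier

def discover_assets() -> list:
    # In a real system this step would query the platform inventory;
    # here we return a fixed, made-up sample.
    return [
        Asset("tcp-mgmt-cluster", "vm"),
        Asset("subscriber-db", "database"),
        Asset("cnf-core", "k8s-namespace"),
    ]

def protect(assets: list, policy: Policy) -> list:
    """Select assets matching the policy and emit backup job descriptions."""
    return [
        f"backup {a.name} -> {policy.target} ({policy.schedule})"
        for a in assets
        if a.kind in policy.kinds
    ]

jobs = protect(discover_assets(), Policy({"vm", "database"}, "daily", "dd-appliance-01"))
```

The design choice this illustrates is separating discovery from policy: newly discovered workloads are picked up automatically on the next pass, which is what removes the need for manual, per-asset intervention.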

This solution is more than just technology. It has been fully developed and verified in Dell’s Open Telco Ecosystem Lab (OTEL), which validated the effectiveness of PowerProtect Data Manager for VMware Telco Cloud Platform. Dell’s OTEL facilities provided access to the latest cutting-edge technologies and tools to develop this solution. Our hybrid connectivity model enabled seamless collaboration between Dell and VMware through a global, virtually accessible lab. This approach allowed us to validate the solution effectively, making it more consumable for mobile operators. Rest assured that you are deploying a solution that meets industry standards and best practices.

As CSPs embrace VMware Telco Cloud Platform, Dell Technologies stands by their side with robust data protection solutions. PowerProtect Data Manager ensures the resilience and security of your Telco Cloud Platform components and workloads. Modernize confidently, knowing that Dell Technologies is safeguarding your critical data.

Source: dell.com

Thursday 4 July 2024

Future-Proofing Your Business With Sustainable IT Strategies

Future-Proofing Your Business With Sustainable IT Strategies

In a world where technology evolves at breakneck speed, the question of sustainability in IT practices has never been more critical. How do organizations and their leaders approach the complexities of sustainable IT amidst new AI priorities and an ever-changing landscape? Read on for a recap from three of the industry’s most prominent thought leaders across Computacenter, Creative Strategies and Dell Technologies.

Key Takeaways


◉ The importance of compliance with evolving regulations
◉ The role of governance in sustainability
◉ The impact of AI on sustainability goals
◉ The necessity of collaboration for collective success

The Regulatory Landscape


Moderated by Carolina Milanesi, President and Principal Analyst at Creative Strategies Inc. and Founder of The Heart of Tech, the panel began by asking each leader—Wendy Coticchia, sustainability champion and SVP Head of Compliance for the Americas and APAC at Computacenter, and Jamila Cowan, Director of Strategic Relationships, Sustainability and ESG at Dell Technologies—about actionable advice businesses should consider within their AI strategies.

Coticchia emphasized the need for businesses to have a plan for compliance, citing the influx of new regulations from across the world, specifically the European Union, United Kingdom and California. She highlighted the importance of internal alignment on whether to comply with the evolving regulations: “It’s about deciding, are we just looking to comply with the law, or do we want to do more because we think that gives us a competitive advantage? Or is our purpose such that we want to make meaningful change? Once you’ve got that alignment, then you really can start to prepare.”


Sustainability Beyond Environmental Factors


Milanesi pointed out that while many focus solely on the environmental aspect of sustainability, it encompasses much more, including supply chain operations and pay equity: “If we double down on sustainability, what do you think the core role of sustainability is in really safeguarding your future from a business perspective?”

The Role of Governance


Cowan stressed the significance of an end-to-end approach to sustainability and collaboration across organizations, especially in a period of increasing stakeholder demands, rising energy costs, new regulations and reporting requirements: “We’ve centralized our ESG operating model and governance. We’ve adopted a hub and spoke model that includes participation from every aspect of our business. And so, this has been a game changer for us.”

Sustainability and AI


The panelists agreed that AI had a significant role to play in sustainability, despite its energy-intensive requirements. Coticchia highlighted the need for companies to work with trusted partners who include sustainability and social responsibility in their AI solutions: “Are we utilizing renewable energy? Are we using the AI itself to make our solutions more efficient? Are we considering things like circular services so that we’re ensuring we’re recycling or reusing or repurposing those devices?”

Cowan shared insights from Dell’s Concept Luna, explaining how learnings from the project have been applied across their product portfolio: “Sustainability isn’t just the responsibility of the sustainability team.”

The Path Forward


Concluding the discussion, each leader emphasized both the urgency and importance of sustainable IT strategies in business. The panelists urged businesses to start their sustainability journey, leverage AI for positive impact and foster collaboration for faster progress. “We can go further, faster together,” Cowan said. “We have customers who are just beginning their sustainability journey, and it’s more of a conversation around how we set our goals.” The session was a reminder that sustainability is a shared responsibility, requiring transparency, trust and collective action to ensure a viable future for businesses and the planet.

Source: dell.com

Tuesday 2 July 2024

Dell’s AI Infrastructure Makes Waves in Forrester Report

Dell’s AI Infrastructure Makes Waves in Forrester Report

Dell is excited to announce its recognition as a Leader in The Forrester Wave: AI Infrastructure Solutions, Q1 2024. We believe this recognition highlights Dell’s pivotal role as a modern infrastructure provider, helping customers simplify, streamline and safeguard their AI adoption within the dynamic landscape of artificial intelligence.

A Look Inside the Forrester Wave


For those unfamiliar, The Forrester Wave assesses the strengths and weaknesses of key players in tech markets like AI infrastructure, evaluating vendors based on specific criteria. In The Forrester Wave™: AI Infrastructure Solutions, Q1 2024, Forrester stressed the importance of platforms optimized for generative AI (GenAI) workloads, with a focus on data prep, model training and model inferencing.


The Results Are In: Dell a Leader in AI Infrastructure


The Forrester Wave evaluated the 12 AI infrastructure providers that matter most based on their current offerings, strategy and market presence. Here’s why Dell was recognized as a Leader:

Current offering. Dell received the highest scores possible in the criteria of architecture, configuration and training. Forrester acknowledges the complexities of running diverse AI workloads for organizations, yet Dell is dedicated to simplifying this with our “meaty” reference architectures. The Forrester report also cited Dell’s extensive portfolio, including our acceleration-optimized PowerEdge Servers and the wide range of storage solutions Dell offers, such as PowerFlex, PowerScale and more.

Strategy. Dell achieved the highest scores possible in the vision, partner ecosystem and supporting services and offerings criteria. The report states, “Dell’s superior vision is to offer the quickest, most integrated solution to enterprises for on-premises and colocation deployments.” The report also states, “Reference customers appreciate Dell’s exceptional level of service in quickly designing custom AI infrastructure that integrates with its existing IT infrastructure.”

Market presence. Dell’s significant market presence stems from its global reach and extensive customer base. According to the Forrester report, “Dell is a good fit for enterprises that wish to deploy on-premises or at a colocation [facility] and want an ongoing partnership to smoothly evolve AI infrastructure as demand increases.”

Accelerating AI Workloads: Dell Powering the Future of AI Infrastructure


AI is transforming how we work and innovate. Organizations need the right data, strategy, technology and tools to take proof of concept to proof of productivity. They need the path to be simple, have control over their models and maintain their data sovereignty. Dell makes this a reality by bringing AI to the data. Here’s how:

Easy button for the enterprise. Dell drives end-to-end GenAI outcomes by providing comprehensive solutions for IT and data scientists to apply AI and boost productivity. By helping customers right-size their models for specific use cases and applications, and ensuring seamless integration across the AI stack, Dell amplifies the value of AI deployments. Whether it’s workstations, edge, core or cloud, Dell delivers a consistent experience yielding positive outcomes, regardless of data growth or diversity. Bringing Dell AI solutions to the data results in more efficient and effective models, reduced costs and energy savings.

The latest example of Dell’s continuing AI innovation and end-to-end outcomes is the Dell AI Factory with NVIDIA. Announced just this week, the Dell AI Factory with NVIDIA is the industry’s first comprehensive, turnkey AI solution designed to address the complex needs of enterprises seeking to leverage AI. It integrates Dell’s compute, storage, networking, client devices and software capabilities with NVIDIA’s AI infrastructure and software suite. The Dell AI Factory with NVIDIA supports a wide range of AI use cases and applications, and also offers enterprise-grade professional services to accelerate AI adoption. Additionally, it is available through Dell APEX subscriptions, providing pay-as-you-go flexibility.

Broadest GenAI solutions and services portfolio. Dell boasts the world’s broadest AI solutions portfolio from desktop to data center to cloud. With a diverse silicon lineup including NVIDIA, AMD, Qualcomm and Intel, we offer versatile and powerful solutions for advanced AI applications. Our GenAI solutions are purpose-built for AI use cases, enhancing time-to-value and mitigating AI adoption risks. For example, Dell Generative AI Solutions with NVIDIA provide customers with integrated, turnkey solutions that are validated to support all aspects of the GenAI lifecycle, from model training and tuning to inferencing and retrieval augmented generation (RAG). In comparison to public cloud alternatives, Dell AI solutions provide superior control over data and who can use it.

Broad, open ecosystem of partners. As a strategic integrator across the AI stack, Dell has built a broad, open ecosystem of independent software vendors (ISVs) and diverse AI models. This ecosystem encompasses both commercial and open-source developer tools, offering complete flexibility to meet any need. We collaborate with AI leaders like NVIDIA, AMD, Hugging Face, Meta, Starburst and a wide array of partners spanning colocation, silicon vendors and global systems integrators. Our commitment extends to providing support for software models and machine learning frameworks, as well as access to an extensive data analytics toolbox. This comprehensive approach is designed to streamline any AI journey, minimize risks and expedite GenAI onboarding for our customers.

Level Up Your AI with Dell


Dell AI solutions are tailored for the most essential AI use cases, with leading infrastructure from the desktop to the data center to the cloud. By bringing AI to our customers’ data, we drive efficiency while ensuring robust data security and control. Right-sizing models to each customer’s unique needs not only trims costs but also reduces energy consumption. Our expert services, open partner ecosystem and Dell GenAI solutions promise smooth AI integration with minimized risk, setting us apart in the industry.

Source: dell.com