Thursday, 29 February 2024

Shift Device Refresh to a Hardware Lifecycle Management Leader


The increased digitization of goods and services has positioned PCs as the lifeblood of every modern business. And keeping pace with the latest updates to BIOS, drivers, Windows operating systems and applications is putting a strain on IT operations. Some factors at play include:

  • Return to office. While hybrid workforces are here to stay, a recent IDC study shows many workers (78%) have already returned to the office permanently. Those devices that were deployed when workers went remote in 2020 are nearing their end-of-life date.
  • AI. IDC forecasts artificial intelligence PCs to account for nearly 60% of all PC shipments by 2027. As you refresh your PC fleet, you will want to incorporate NPU-enabled configurations for the use cases that benefit from enhanced local AI.
  • Windows OS transition. Windows 10 will go out of support on October 14, 2025, and the current version (22H2) will be the last version of Windows 10.

When I talk to customers about device refresh, I often hear they are burdened with bubble-like refresh events that are disruptive, cost more than they should, require IT to stop focusing on business-led initiatives and often result in a negative end user refresh experience. And with today’s hybrid environment, managing a decentralized workforce makes the refresh process even more complex, costly and disruptive.

Organizations struggling with these challenges should consider partnering with a global leader in PC refresh management to design and run an effective refresh program that encompasses:

  • Automation of the entire refresh process, from system build to end user communication.
  • Optimization of IT resources and a predictable refresh budget.
  • A non-disruptive refresh experience for end users.

Bringing Together Dell’s Award-winning Hardware With its Hardware Lifecycle Management Expertise


Dell Managed Services for device refresh delivers a strategic PC refresh program that includes roll-out planning, procurement forecasting, end user communication and deployment automation.

Device refresh should not be disruptive to your business. Dell Technologies brings deep expertise, planning tools and patented intellectual property to automate the refresh process. From strategic timing to efficient execution, you can hold Dell accountable for delivering exceptional refresh outcomes.

You can also rely on Dell for demand forecasting and control of end user device options to maintain clear refresh cost visibility over time. This enables a seamless supply chain, with timely and targeted availability, to keep your fleet refresh on track and in budget—all at a predictable cost per device.

Dell manages interactions and communications with end users throughout the refresh process, from request to deployment, inclusive of escalations and incident resolution. Our commitment ensures the refresh experience for your employees is productive and easy. For end users, getting a new PC should be like receiving a great gift—fun and exciting.

Managing the Device Lifecycle Beyond Day One


Once PCs are deployed and your end users are happy, events will continue to occur over the span of the PC lifecycle where Dell can bring tremendous value. The Dell Lifecycle Hub ships new hire kits the same day or the next business day after a request, facilitates whole-unit exchange and can reclaim and recycle devices with NIST-level data wiping, either onsite or remotely. Lifecycle Hub ensures your devices have more than one life by cascading reclaimed devices to end users in your organization. It also allows organizations to trade in their devices for credits toward the purchase of new Dell devices.

Time to Focus on Business-led IT Initiatives


As you assess your PC refresh and hardware lifecycle management needs, consider Dell Managed Services. Shift tedious and time-consuming work to us so you can focus on your business-led IT initiatives.

Our PC hardware lifecycle management expertise spans the world, includes patented intellectual property and has been utilized by hundreds of customers for millions of device refreshes. Reach out to your Dell account representative today to learn more about how Dell can help you manage your upcoming PC refresh and your PC hardware lifecycle.

Source: dell.com

Tuesday, 27 February 2024

AI Technology Makes Self-Healing PCs a Reality


When was the last time your day got totally derailed by a seemingly minor issue—a car that wouldn’t start, a stumble on the stairs, a PC running slow? Those things you never give a second thought can wreak havoc when there’s an unexpected problem. Suddenly, you’re scrambling for a rideshare, hobbling around on a sprained ankle and missing a proposal deadline.

This blog post isn’t about how to cope with life’s unexpected challenges, though. It’s about how to avoid ever having to deal with them in the first place. Because what if the thing that fails—your car, your weak ankle, your PC—could just instantly fix itself, without you having to lift a finger? I can’t speak for the first two, but I can tell you that the PC that heals itself is a reality coming this spring to users of ProSupport Suite for PCs.

But I’m getting ahead of the story.

Present State: PC Triage, Treatment, Healing


The way the world keeps its PCs up and running has always been much like the way we keep the human body working right. Take that sprained ankle I mentioned earlier. If it’s bad enough to keep you from getting around, and your usual ice and compression aren’t doing the trick, you’re going to seek help from an expert. You’ll call your doctor’s office, or maybe head over to a minor emergency clinic. If you’re lucky, they won’t be too busy, and they’ll quickly assess and treat your injury so you can be on your way.

That’s pretty much the current model for PCs, too. If your machine is running painfully slow, and the usual restarts aren’t helping, you go to IT with your issue so you can get the problem fixed and get back to work. And just as with our sprained-ankle analogy, you hope they’re not too busy and can resolve the issue quickly. Of course, they’re hoping the same thing, because the sooner users can get back up and running, the more productive everyone—IT admins and PC users—will be.

Enter AI: Self-healing Automation for PCs


With the help of AI, the traditional triage-and-treatment model for healing PCs gives way to a more proactive approach, in which telemetry data from networked PCs can activate processes to automatically detect and resolve PC problems—before the end user even realizes anything is wrong.

Going a step further, this AI-powered self-healing can detect issues that are likely to cause problems in the future and address them across the networked environment before the problems arise. To continue with our healthcare metaphor, think of that as a vaccine against certain conditions—like a flu shot that keeps a patient from ever getting sick in the first place.

This emerging approach, made possible by AI, is all about creating an environment for PC health where persistent issues across a fleet of machines can be automatically resolved in less time and with minimal human intervention—or even no human intervention. That, in turn, leaves IT free to focus on more critical and more strategic priorities.
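
To make the closed-loop idea concrete, here is a minimal, hypothetical Python sketch of a telemetry-driven "detect and remediate" cycle. The event names, script paths and severity scheme are invented for illustration; Dell has not published the internals of the SupportAssist implementation.

    # Illustrative, hypothetical sketch of closed-loop self-healing.
    # Event names, script paths and severities are invented, not Dell's.
    from dataclasses import dataclass

    @dataclass
    class TelemetryEvent:
        device_id: str
        signal: str    # e.g., "disk_near_full", "thermal_throttling"
        severity: int  # 1 (informational) .. 5 (critical)

    # Known issue signatures mapped to automated remediation scripts.
    REMEDIATIONS = {
        "disk_near_full": "scripts/clear_temp_files",
        "thermal_throttling": "scripts/update_thermal_profile",
        "driver_crash": "scripts/reinstall_driver",
    }

    def heal(events):
        """Resolve known issues automatically; escalate the rest to IT."""
        actions = []
        for event in events:
            script = REMEDIATIONS.get(event.signal)
            if script:
                actions.append(f"run {script} on {event.device_id}")
            elif event.severity >= 4:
                actions.append(f"open IT ticket for {event.device_id}: {event.signal}")
        return actions

    if __name__ == "__main__":
        fleet_events = [
            TelemetryEvent("pc-0042", "disk_near_full", 3),
            TelemetryEvent("pc-0107", "kernel_panic", 5),  # unknown signature -> escalate
        ]
        for action in heal(fleet_events):
            print(action)

The key design point the sketch illustrates: known signatures are resolved without human intervention, and only the unknown, high-severity cases reach IT.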

Coming this Spring: Self-healing Automation from Dell ProSupport Plus


I’ve been talking in metaphors and hypotheticals so far, but I’m excited to now introduce the revolutionary new real-world implementation of PC self-healing that will be part of the Dell ProSupport Plus offering beginning this spring. That’s when ProSupport Plus customers who are connected to Dell SupportAssist AI technology will be able to take advantage of self-healing automation to optimize PC performance and resolve a variety of PC issues—without requiring active intervention from IT and without disrupting users.

Initially, a library of IT-driven scripts will be available that AI can launch across fleets to resolve issues like blue screen errors and thermal issues. Over time, we’ll continue to refine these scripts and develop new ones to further broaden the scope of these capabilities.

I’m thrilled to be part of realizing the vision of AI-powered, self-healing automation and delivering an unprecedented and ever-expanding set of capabilities for IT support. It’s going to change how IT delivers support and how users experience it—forever, and for the better.

Source: dell.com

Thursday, 22 February 2024

What Do Automation and Martial Arts Have in Common?


As a child growing up in New York, I was not the biggest or most athletic kid on the block and as was typical at that time, it led to all kinds of “interesting” interactions with various other people. As a result, my father enrolled me in my first set of Karate classes to help me learn some self-defense and to toughen me up mentally and physically. Over the years, I’ve had the great opportunity to practice Karate, Kung Fu, Krav Maga and several other martial art disciplines. One thing that struck me recently was the similarity between martial arts practice and what we need to do as we modernize the telecom network and its operations.

In martial arts, you’re constantly assessing information from your surrounding environment (as well as yourself), analyzing that information to find common patterns, and then formulating an appropriate response to prevent something bad from happening—sometimes proactively. In essence, you have your own closed-loop automation system you are continuously improving based on intelligence, training and experience.

We’re increasingly asked by our telecom customers how to cloudify the telecom network and reap the economic and agility benefits of such a transformation. And we realized we needed to invest in a set of software tools at the infrastructure layer to bring the benefits seen in other verticals to the telecom environment. While there are all levels of expertise in building telecom cloud networks, we see many communications service providers (CSPs) who are closer to a Karate white belt than a black belt in their ability to operate these modernized networks efficiently and effectively.

In telecom networks, automation has long played an essential role. But as infrastructure transforms from tightly integrated vertical stacks to open, horizontal layers, the number of moving parts that must work together goes way up. What was once a mostly single-vendor environment is now made up of technologies from multiple vendors throughout the stack. The only way to overcome this complexity and reap the benefits of a modern telco cloud—flexibility, agility, efficiency—is to deploy vendor-agnostic, comprehensive automation that integrates into your operating environment.

At Dell, we’re committed to helping communications service providers do exactly that. Designed from the ground up to support open, cloud-native networks, the new Dell Telecom Infrastructure Automation Suite simplifies and accelerates your cloud transformation by automating the management and orchestration of the network infrastructure. The suite unifies management and orchestration in a single plane and is easily extensible to any infrastructure controller up to, but not including, the CaaS layer. With the suite, you can enable your teams to meet your unique business requirements, as you automate and standardize operations for flexibility and agility, as well as lay the groundwork for AIOps improvements in the future.

A Closer Look at the Dell Telecom Infrastructure Automation Suite



At the highest level, the suite is a vendor-agnostic software solution that automates the management and orchestration of open network infrastructure. The platform provides a management and orchestration plane for cloud infrastructure and features an open architecture for easy extensibility. With the capability to deploy servers to CaaS and to automate infrastructure lifecycle operations, it serves as the central automation controller for multicloud environments, extending from hyperscalers to the core cloud, enterprise cloud, edge and RAN. It seamlessly orchestrates domain-specific infrastructure controllers, promoting cross-domain automation and bridging the vertical silos that once existed in telecom environments. This integration capability allows for seamless operation with existing OSS/BSS systems, preventing the creation of new automation islands. In addition, the suite includes a controller for automating the deployment and lifecycle management of bare metal servers, consolidates inventory and aggregates infrastructure telemetry, laying the groundwork for AIOps.

APIs enable easy integration of the Suite with higher-level network orchestrators, such as a domain orchestrator in the Core or an SMO in the RAN.

Plug-ins connect resource controllers for servers, CaaS, storage and network equipment to the platform. You can either choose off-the-shelf plug-ins or work with Dell or a systems integrator to build custom plug-ins. At launch, we’re offering an off-the-shelf plug-in for Red Hat.

TOSCA-based blueprints bring it all together, enabling CSPs to implement their workflow and intent as declarative blueprints and automate execution consistently, without manual errors and delays. Dell will offer basic blueprints with an SDK, but you can create your own blueprints or work with Dell Services or a systems integrator. The use of TOSCA-based templates ensures blueprints are portable and fit well into a GitOps operating model, allowing CSPs to adopt modern cloud operating models and extend Infrastructure-as-Code to the infrastructure layer.
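
To illustrate the declarative model in the abstract, here is a small, hypothetical Python sketch of how an automation engine can turn intent (a blueprint) into an ordered action plan. Real blueprints in the suite are TOSCA documents; the site names, fields and plug-in hand-off below are invented for illustration.

    # Minimal, hypothetical sketch of the declarative-blueprint idea:
    # the operator states desired state, the engine derives the steps.
    desired = {"site-edge-01": {"servers": 4, "caas": "openshift"}}
    actual = {"site-edge-01": {"servers": 2, "caas": None}}

    def plan(desired, actual):
        """Compare intent to reality and emit an ordered action plan."""
        steps = []
        for site, want in desired.items():
            have = actual.get(site, {})
            missing = want["servers"] - have.get("servers", 0)
            if missing > 0:
                steps.append(f"{site}: provision {missing} bare-metal servers")
            if want["caas"] and have.get("caas") != want["caas"]:
                steps.append(f"{site}: hand off to CaaS plug-in for '{want['caas']}'")
        return steps

    for step in plan(desired, actual):
        print(step)

Because the blueprint is pure data, the same intent can be re-applied repeatedly and version-controlled, which is what makes the GitOps model mentioned above possible.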

To support the suite, the Dell Services team is ready to assist you from day 0 tasks to day 2. For day 0 discovery and design requirements, Dell Services offers business outcome workshops, tailored design and blueprint development, configuration of your cloud platform stack and design fine-tuning. As you deploy and integrate on day 1, we can simplify deployment with remote implementation and drive specific outcomes with custom integration, including integration of your DevOps tools (Git, ArgoCD, Kafka, etc.). We can also help with day 2 management and support activities by simplifying upgrades, updates, rollbacks and expansions, providing comprehensive 24x7x365 assistance with proactive predictive failure detection, accelerating issue resolution with restoration SLAs and enabling you to leverage a dedicated telecom-trained account team.

Core Capabilities of Dell Telecom Infrastructure Automation Suite


The suite offers a list of capabilities and benefits that is too long to fit into one blog post, but here are the highlights: 

  • Lifecycle management and fault remediation enable operators to consume infrastructure through a CI/CD workflow as they move to cloud operating models, especially around day 2 operational issues.
  • Customizable blueprints and plug-ins enable CSPs to implement their workflow and intent as declarative blueprints and automate execution consistently, without manual errors and delays.
  • Golden configuration and drift detection enable the detection of configuration drift and facilitate replacement and rebuild of servers (integrating storage with CaaS is on the roadmap); a brief sketch of the drift-detection idea follows this list.
  • Aggregation and exporting of telemetry enable CSPs to implement service assurance, predictive analytics and closed-loop automation with AIOps.
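
As referenced in the list above, here is a minimal, hypothetical Python sketch of the golden-configuration idea: fingerprint the fields that define the desired state and flag servers whose live configuration diverges. The field names and values are invented; the suite's actual drift-detection logic is not public.

    import hashlib
    import json

    # "Golden" configuration: the desired state for every server in the pool.
    GOLDEN = {"bios": "2.19.0", "nic_firmware": "22.31.6", "boot_mode": "uefi"}

    def fingerprint(config):
        """Stable hash over the configuration fields that define the golden state."""
        return hashlib.sha256(json.dumps(config, sort_keys=True).encode()).hexdigest()

    def detect_drift(fleet):
        """Return the servers whose live configuration diverges from golden."""
        golden_fp = fingerprint(GOLDEN)
        return [name for name, cfg in fleet.items() if fingerprint(cfg) != golden_fp]

    fleet = {
        "server-a": {"bios": "2.19.0", "nic_firmware": "22.31.6", "boot_mode": "uefi"},
        "server-b": {"bios": "2.17.1", "nic_firmware": "22.31.6", "boot_mode": "uefi"},
    }
    print(detect_drift(fleet))  # ['server-b'] -> candidate for rebuild to golden state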

The move to a modern cloud-native network is gaining momentum across CSP networks. Many are finding it essential to remaining competitive in today’s markets as they consider their future as a Digital Service Provider and want to move at the “speed of software” in rolling out new services. In the ever-faster-evolving telecom market, an automated cloud operating model is the best way to achieve the flexibility, agility and efficiency at scale that CSPs require to innovate and compete, while controlling costs.

Source: dell.com

Tuesday, 20 February 2024

Empowering AI: The Critical Role of Dell Connected PCs


In the era of artificial intelligence and machine learning, the demand for enhanced computing power and connectivity options is more urgent than ever. Dell Connected PCs play a pivotal role, offering a robust foundation for AI systems. This blog post explores the essential elements that make Connected PCs indispensable in the age of AI, focusing on the benefits of staying connected while utilizing advanced AI tools.

Bringing AI-level Productivity to Everywhere Work


Connected PCs seamlessly marry the convenience of smartphone-like connectivity with the robust capabilities of personal computers, thanks to the integration of mobile broadband modules. This integration ensures a secure and high-speed connection to both 4G and 5G cellular networks, providing users with a reliable link even when operating beyond the confines of trusted Wi-Fi networks.

The embedded mobile broadband not only facilitates efficient data transfer for AI applications, but it also guarantees a seamless user experience by enabling access to the power of AI tools like Microsoft 365 Copilot regardless of location. This connectivity paradigm empowers users to stay productive and engaged with advanced AI functionalities, ensuring the benefits of local processing and rapid data transfer extend to a truly mobile and interconnected work environment.

Microsoft Copilot, integrated into Microsoft 365, combines the power of large language models with data in Microsoft Graph and Microsoft 365 apps. This integration turns words into a powerful productivity tool, enhancing creativity, unlocking productivity and upleveling skills.

With Copilot, staying connected is more than a necessity—it’s a catalyst for transformative work in Microsoft applications. Whether that’s jump-starting the creative process in Word, creating beautiful presentations with simple prompts in PowerPoint or enabling quick analysis of trends in Excel, being able to rely on a stable and secure internet connection whenever work is being done is essential to effectively utilize tools like Copilot.

When paired with advanced AI tools like Microsoft 365 Copilot, Connected PCs become indispensable assets for modernizing the workplace. The combination of high-speed data transfer, improved reliability and seamless integration with AI tools positions Connected PCs as the driving force behind the next wave of innovation.

Source: dell.com

Saturday, 17 February 2024

Achieve Business Resiliency at the Retail Edge


Dell Technologies is committed to helping retailers deploy more intelligent systems and technologies into their business operations to improve service and drive efficiencies, from computer-vision assisted loss prevention, robotics and AI-assisted pick and pack, to self-service kiosks and pick-up lockers. These applications depend on software—increasingly AI-based applications—running on smart devices that live inside the retail store, including servers, storage, handheld scanners and POS systems. When you add in the network of interconnected cloud and enterprise services that support the backend of these systems, you need solutions to help keep your systems running smoothly.

This rise in devices and an increasingly complex software ecosystem presents challenges. As retailers deploy more intelligence in their stores, warehouses and supply chain, managers and frontline service workers are becoming less capable of diagnosing and troubleshooting problems themselves, creating greater risk of system failure and significant downtime. IT cannot maintain full-time staff across hundreds or thousands of store locations, so retailers need to think differently about support. That is why solutions like Dell NativeEdge are critical to bridging the service gap, reducing the overall support workload and improving business resiliency.

Automating the Edge for Resiliency


To address these challenges and more, we are working with Centerity Systems to provide wall-to-wall observability on edge devices to help maintain business continuity and customer satisfaction. By deploying virtual agents into your edge and across your multicloud environments, this solution can help you quickly discover and remediate technical issues and automatically escalate tickets to IT where needed. This helps reduce the overall mean-time-to-repair by up to 75% and helps ensure your edge assets are operational and delivering value for your business.

This solution is blueprinted and can be delivered quickly through the Dell NativeEdge operations platform, which enables rapid proof of concept and ease of management for edge applications like Centerity. By simplifying operational complexity at the edge, NativeEdge helps retailers quickly achieve scale, with zero-touch deployment of infrastructure and applications at the edge and blueprints to quickly deliver and update applications across all of your edge locations. By using Zero Trust principles to secure your edge environments, NativeEdge helps protect your edge from cyberattacks and malicious actors to keep your business running smoothly.

The Future of Intelligent Retail


Edge technologies offer many opportunities to retailers that are looking to drive efficiencies and improve the customer experience. In order to be resilient, business leaders must consider not only how to deploy and scale edge applications, but also how to manage the array of smart devices that make up their edge. Running Centerity on NativeEdge helps retailers deploy applications quickly and achieve business value in less time.

Come see how Dell is transforming the retail edge so you can run your business securely and at scale. We’re constantly expanding our partner and OEM customer ecosystem with powerful solutions like Centerity to help you get started on your NativeEdge journey today.

Source: dell.com

Thursday, 15 February 2024

Intel and Dell: Sustainable Computing with Immersion Cooling


Together, Intel and Dell Technologies continue to enable innovations that improve lives, increase productivity and unleash creativity. With every generation of processors, energy efficiency is increasing: more computations can be performed per watt of electricity needed by the servers. However, the demand for compute performance is insatiable, leading to processors being designed to run at higher wattages to meet the performance demand.

Cooling these higher thermal design power (TDP) processors becomes a challenge from both a technology and energy consumption standpoint. Traditional cooling systems, which can account for up to 40% of a data center’s energy consumption, weren’t designed to remove the higher amount of heat generated. Cooling the data center through immersion technology enables cooling higher TDP parts while being more sustainable through reduced electricity (and water) consumption.

In this blog series, we will present sustainable approaches to deploying the latest in compute architectures using different cooling techniques. One way to improve sustainability is to reduce the amount of cooling needed in the data center space. We can see how to do this by considering how heat moves and how that drives cooling equipment in the data center.

Heat moves from hot to cold, and the more heat that needs to be moved, the larger the temperature difference between the hot and cold needs to be. In the case of computers, one of the most important “hot” parts is the processor, while the “cold” part is the air around the data center where we want to release the heat. If the temperature difference between the processor and the outside air is not large enough, we need to use refrigeration compressors to create a bigger temperature difference. These compressors require more energy than pumps and fans by themselves, so our goal is to make the data center cooling so effective that we don’t need the compressors all or most of the time.
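
The reasoning above can be summarized with the textbook steady-state heat-flow relation (a standard simplification, not a Dell-specific model):

    \[ Q = \frac{T_{\text{processor}} - T_{\text{ambient}}}{R_{\text{th}}}
       \quad\Longleftrightarrow\quad
       \Delta T = Q \cdot R_{\text{th}} \]

For a fixed thermal resistance R_th, moving more heat Q requires a proportionally larger temperature difference ΔT. Immersion cooling attacks R_th directly: liquid carries heat away far more effectively than air, so the same Q can flow with a much smaller ΔT, which is why the outside air can often serve as the cold side without compressor-based refrigeration.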

One of the cooling technologies that could improve data center energy use is immersion cooling. In this technology, the servers are submerged in a liquid that won’t conduct electricity and is compatible with the components in the computer. There are no fans in the server, which alone saves data center power. There is no need to move large amounts of air in the data center facility, as immersion tanks typically collect nearly all of the heat from the compute directly into the liquid. Because the heat is captured by the fluid within the immersion tanks, there are additional savings at the facility level with the elimination of facility fans that would deliver air to the cold aisle in front of the racks in an air-cooled data center. Compressors may be removed from the cooling system in some cases, compounding the energy savings. With some compute configurations, the hot coolant leaving the compute may even be useful for other applications, like district heating or industrial processes.

One of the key challenges facing the technology is that the thermal performance of current immersion systems can be limiting for some processors and servers. However, Intel and Dell have partnered to maximize the opportunities to deploy sustainable compute in immersion. Intel Xeon processors, such as the Xeon Platinum 8480+ or the Xeon Platinum 8470N, let users take advantage of the latest Intel compute architecture, built-in accelerators and security features for greater performance. These processors are also more efficient than previous generations, providing an average of 2.9x better performance per watt across a variety of workloads. With more built-in accelerators than any other CPU on the market, 5th Gen Intel® Xeon® Scalable processors deliver outsized performance and total cost of ownership for AI, database, networking and HPC workloads. You can see up to 10x higher performance per watt using built-in accelerators on targeted workloads.

These processors are also higher in TDP, which Dell has mitigated through new Dell PowerEdge server features for immersion cooling. With Dell’s PowerEdge Smart Cooling features, including a symmetric layout and streamlined flow paths for immersion liquid, systems can be cooled more effectively. With coolant flowing efficiently around the hottest components, energy savings of up to 30% can be achieved over traditional perimeter air cooling for comparable servers and configurations, according to internal Dell modeling.

Source: dell.com

Tuesday, 13 February 2024

Leading the Way Through Data Protection Industry Changes


The data protection landscape shifted quite dramatically with the recent announcement that Cohesity would be acquiring its much larger rival, Veritas Software. Acquisitions of this size in the data protection market space are somewhat unusual, particularly when the acquiring entity is significantly smaller in both revenue and customer base than the acquired company. Bringing two technology organizations together with different cultures and overlapping product offerings is certainly going to be challenging—particularly since the stated intent is to continue supporting all the enterprise solution offerings in Veritas’ portfolio for years to come and “leave no customer behind.” In addition, they will try to do this while integrating the best capabilities of the collective product offerings from both companies. To be sure, these are laudable goals; however, it will be a delicate balancing act.

Dell Technologies has been on a similar, albeit less chaotic, journey for the past five years. In 2019, we released our modern cloud-native data protection platform, PowerProtect Data Manager. Since its release, we have been helping our customers make the transition from their traditional Dell data protection software solutions to Data Manager. Its growth has been impressive. In this last calendar year alone, Data Manager adoption has increased over 100% and has quietly vaulted into a #1 position from a customer satisfaction NPS perspective. In fact, it was chosen as the leader in innovation, scalability and operational simplicity, and is preferred by more IT decision makers than Rubrik, Cohesity, Veeam, Commvault and Veritas. PowerProtect Data Manager is available as standalone software or as an integrated appliance.

In addition to delivering innovative feature enhancements like Transparent Snapshots and Dynamic NAS Protection, Data Manager also provides potent cyber resiliency capabilities through its tight integration with PowerProtect Cyber Recovery. The foundation of Data Manager’s cyber resilient multicloud data protection capabilities resides in our PowerProtect Data Domain appliances. In fact, many of our customers using Veritas NetBackup have been relying on Data Domain appliances for decades to protect and secure their critical business data on-premises and in the public cloud. NetBackup users can efficiently move, manage, protect and recover their data anywhere it resides across edge, core and multicloud infrastructure using our appliance offerings. And many of these same customers are using our appliances to create an isolated, digital vault with immutability and intelligence to ensure they can recover from cyberattacks.

In these uncertain times, it’s critical for our customers to have confidence in their data protection and cyber resiliency infrastructure. They need solutions with a proven track record and a promising future. Dell is committed to helping our customers address their data management challenges, both now and in the future. Our solutions offer the operational simplicity, resilience, efficiency and innovation required to navigate the complexities of the digital era seamlessly.

For our valued customers currently leveraging Veritas as their backup solution, Dell extends an invitation to engage in a discussion regarding PowerProtect Data Manager. We are eager to explore how we can assist in transforming your backup infrastructure into a cost-effective and exceptionally reliable modern data protection environment, mitigating risks from human error, natural disasters and cyber threats.

Source: dell.com

Saturday, 10 February 2024

Empowering Developers with Meta Code Llama 70B Model


It is great to be working with Meta as they roll out the 70 billion parameter versions of their three Code Llama models to the open-source community. This is another significant step forward in extending the availability of cost-effective AI models to our Dell Technologies customers.

Code assistant large language models (LLMs) offer several benefits for code efficiency, such as enhanced code quality, increased productivity and support for complex codebases. Moreover, deploying an open-source LLM on-premises gives organizations full control over their data and ensures compliance with privacy regulations, while reducing latency and controlling costs.

Meta has introduced their latest open-source code generation AI model built on Llama 2—the 70 billion parameter versions of the Code Llama models. Code Llama is a state-of-the-art LLM capable of generating code, and natural language about code, from both code and natural language prompts. Meta has shown that these new 70B models improve the quality of output produced when compared to the output from the smaller models of the series.

The Code Llama 70B models, listed below, are free for research and commercial use under the same license as Llama 2:

  • Code Llama – 70B (pre-trained model)
  • Code Llama – 70B – Python (pre-trained model specific for Python)
  • Code Llama – 70B – Instruct (fine-tuned)

Dell PowerEdge XE9680 with NVIDIA H100 GPUs: A Powerhouse Solution for Generative AI


Dell Technologies continues its collaboration with Meta, providing the robust infrastructure required to support the deployment and utilization of these large language models. Dell servers such as the PowerEdge XE9680, an AI powerhouse equipped with eight NVIDIA H100 Tensor Core GPUs, are optimized to handle the computational demands of running large models such as Code Llama, delivering the processing power needed for smooth and efficient execution of complex algorithms and tasks. Llama 2 is tested and verified on the Dell Validated Design for inferencing and model customization. With fully documented deployment and configuration guidance, organizations can get their generative AI (GenAI) infrastructure up and running quickly.

With Code Llama 70B models, developers now have access to tools that significantly enhance the quality of output, thereby driving productivity in professional software development. These advanced models excel in various tasks, including code generation, code completion, infilling, instruction-based code generation and debugging.

Use Cases


The Code Llama models offer a plethora of use cases that elevate software development (a brief usage sketch follows this list), including:

  • Code completion. Streamlining the coding process by suggesting code snippets and completing partially written code segments, enhancing efficiency and accuracy.
  • Infilling. Addressing gaps in a codebase quickly and efficiently, ensuring smooth execution of applications and minimizing development time.
  • Instruction-based code generation. Simplifying the coding process by generating code directly from natural language instructions, reducing the barrier to entry for novice programmers and expediting development.
  • Debugging. Identifying and resolving bugs in code by analyzing error messages and suggesting potential fixes based on contextual information, improving code quality and reliability.
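
As a rough illustration of instruction-based code generation, the snippet below sketches one way to query the instruct variant with the Hugging Face transformers library. It assumes access to the published model weights and enough GPU memory to host a 70B-parameter model (for example, a multi-GPU server like the XE9680); exact prompt formatting and generation settings should follow Meta's model card.

    # Hypothetical usage sketch: instruction-based code generation with
    # Code Llama 70B Instruct via Hugging Face transformers.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "codellama/CodeLlama-70b-Instruct-hf"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # reduce memory footprint
        device_map="auto",           # shard across available GPUs
    )

    # The instruct model expects a chat-style prompt; the tokenizer's
    # built-in chat template handles the formatting.
    messages = [
        {"role": "user",
         "content": "Write a Python function that checks if a string is a palindrome."}
    ]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    outputs = model.generate(inputs, max_new_tokens=200, do_sample=False)
    # Decode only the newly generated tokens, not the echoed prompt.
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))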

As the partnership between Dell and Meta continues to evolve, the potential for innovation and advancement in professional software development is limitless. We are currently testing the new Code Llama 70B on Dell servers and look forward to publishing performance metrics, including tokens per second, memory and power usage, with comprehensive benchmarks in the coming weeks. These open-source models also present the opportunity for custom fine-tuning targeted to specific datasets and use cases. We are actively engaging with our customer community in exploring the possibilities of targeted fine-tuning of the Code Llama models.

Get Started with the Dell Accelerator Workshop for Generative AI


Dell Technologies offers guidance on GenAI target use cases, data management requirements, operational skills and processes. Our services experts work with your team to share our point of view on GenAI and help your team define the key opportunities, challenges and priorities.

Source: dell.com

Thursday, 8 February 2024

Level Up Your Multicloud Experience Through Accelerated Automation Workflows


Organizations are quickly modernizing their IT operations and processes through automation and growing mature DevOps synergies. They are learning how to upskill their traditional IT admins to increase efficiencies and accelerate business outcomes. Automation accelerates workflows across an organization’s continuous integration and continuous delivery (CI/CD) processes and contributes to DevOps maturity. Automation also helps organizations develop “product first” mindsets and become more competitive due to fewer wasteful maintenance cycles and IT delays.

Dell Technologies partnered with HashiCorp to release Terraform providers for our primary storage portfolio, including offerings such as PowerFlex, PowerStore and PowerScale, to help customers accelerate their platform operations. Today, we add to the catalog the first Terraform provider for Dell APEX Navigator, specifically with Dell APEX Block Storage for AWS.

This follows the initial release of APEX Navigator for Multicloud Storage, a set of capabilities that streamlines multicloud storage management, accelerates productivity and fortifies multicloud operations, all from one centralized location. These capabilities help our customers take full advantage of the high-performance, unique multi-AZ resiliency and cost optimization offered by APEX Block Storage for AWS. The combination of centralized management and the ability to create automation early in discovery gives end users clear purview of their SaaS environment. Additionally, APEX Navigator was intentionally built API-first to simplify integration with popular automation tools such as Terraform. Dell’s API-first strategy provides customers a consistent experience—making it easier and faster to consume across Dell products.


Incorporating automation tools for infrastructure enhances multicloud expansion with fluidity and scalability. Terraform provides an easier way for end users to set up and expedite automation for efficiency and scale across their storage portfolio, without having to worry about the underlying procedural, manual steps that can delay target outcomes such as provisioning or standing up a volume. Additional benefits include reduced risk of overbilling, data consistency and the ability to change and move applications without disruption.

In this release, the following management aspects of APEX Navigator will be available through the Terraform provider (a brief usage sketch follows this list):

  • Deployment of PowerFlex clusters from the AWS Marketplace and/or APEX Block Storage for AWS, and provisioning of storage volumes
  • Data mobility between APEX Block Storage for AWS and Dell PowerFlex on-premises
  • Decommissioning of instances to reduce risk, such as the removal of a cluster from AWS, and deleting storage volumes
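
To make the workflow tangible, here is a hedged Python sketch of driving a Terraform configuration from a script. The provider source address and the resource and attribute names are invented for illustration; the published Dell provider documentation defines the real schema.

    # Hypothetical sketch: running a Terraform plan/apply cycle from Python.
    # Provider address and resource/attribute names below are invented.
    import pathlib
    import subprocess
    import tempfile

    MAIN_TF = """
    terraform {
      required_providers {
        apex = {
          source = "dell/apex"  # hypothetical registry address
        }
      }
    }

    # Hypothetical resource: a volume on APEX Block Storage for AWS
    resource "apex_block_storage_volume" "analytics" {
      name    = "analytics-vol-01"
      size_gb = 512
      cluster = "powerflex-aws-east"
    }
    """

    def apply(workdir):
        """Run the standard Terraform workflow in workdir."""
        for cmd in (["terraform", "init"], ["terraform", "apply", "-auto-approve"]):
            subprocess.run(cmd, cwd=workdir, check=True)

    if __name__ == "__main__":
        with tempfile.TemporaryDirectory() as tmp:
            workdir = pathlib.Path(tmp)
            (workdir / "main.tf").write_text(MAIN_TF)
            apply(workdir)

The point of the declarative approach is visible even in this toy example: the desired state lives in the configuration, and the provider works out the create, update or destroy calls needed to reach it.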

Accelerate time to value for Dell APEX Block Storage for AWS and Dell APEX Navigator for Multicloud Storage with this first set of capabilities that break down siloed experiences with centralized management of Dell storage software across multiple public clouds. Learn more about Dell APEX Navigator and how to get started using all these great features today with a risk-free 90-day evaluation.

Source: dell.com

Tuesday, 6 February 2024

Addressing the Environmental Footprint while Optimizing the Handprint of AI

Addressing the Environmental Footprint while Optimizing the Handprint of AI

Artificial Intelligence (AI) and generative AI (GenAI) offer a transformational promise to propel essentially all industries and the global economy forward. Data is essential to the learning and decision-making power of AI and, as such, demand on data processing is expected to grow significantly. AI can quickly and efficiently draw insights from enormous data sets, which can require immense compute power, making data center and PC performance critical.

Challenges come with any breakthrough technology, and the environmental footprint of AI is already a topic the industry is working to address. Training and running large AI models and workloads rely on energy and resources, which presents a difficult decision for businesses that seek to embrace this revolutionary technology and meet their environmental sustainability commitments. From our vantage point, neither of these commitments is slowing down.

In May 2023, Gartner® stated in its report “most CEOs (94%) will increase or hold sustainability and ESG investments at similar levels to 2022.” Then the spotlight shifted to AI and its enormous potential to drive efficiency in organizations. Dell Technologies research found that currently, 76% of IT decision-makers plan to increase their budgets to support GenAI use cases and 78% are excited about how an investment in AI can benefit their organizations.

These findings appear to be at odds with one another – companies are simultaneously increasing investment in sustainability and in an energy-intensive technology. But sustainability and AI do not have to be an “either/or” decision. In fact, technological progress is a prerequisite for companies seeking to meet ambitious climate goals. The best innovations can – and should – do both: advance our technological capacity while supporting more energy-efficient and sustainable futures.

Navigating “Both/And”


There are smart and sustainable technology investments and practices to reduce the environmental footprint of AI, while, at the same time, allowing us to leverage AI to solve some of the world’s biggest challenges. Sustainability will be integral to the success of AI technology and vice versa.

While AI requires significant compute power, it currently represents a small fraction of IT’s global energy consumption. We expect this will change as more companies, governments and organizations harness AI to drive efficiency and productivity across their operations and teams.

To manage and even offset AI’s growing carbon footprint, greater control over data center energy consumption is increasingly becoming a top priority. According to IDC, the number one sustainability priority for IT planning and procurement among IT decision-makers is reducing data center energy consumption. Practical solutions can help make this priority a reality:

  • Use energy-efficient, sustainable technology. Minimize AI’s carbon footprint through modern, energy-efficient servers and storage devices and environmentally responsible cooling methods, while powering data centers with renewable energy. Use PCs and other hardware that deliver energy efficiency and include sustainable materials like recycled, ocean-bound or bio-based plastics, low-carbon emissions aluminum, closed-loop materials, and recycled packaging.
  • Right-size AI workloads and data center economics. While some organizations will benefit from larger, general purpose large language models (LLMs), many organizations only require domain- or enterprise-specific implementations. Right-sizing compute requirements and infrastructure can support greater data center efficiency. And, flexible “pay as you go” spending models can help organizations save on data center costs while supporting sustainable IT infrastructure.
  • Recognize the power of local computing. Along the lines of right-sizing AI workloads, local computing will play an important role in prototyping, developing, fine-tuning and inferencing GenAI models. Running complex AI workloads locally on AI-enabled PCs has sustainability advantages as well as other benefits, including cost effectiveness, improved security and reduced latency.
  • Responsibly retire inefficient hardware. Optimize data center performance and energy consumption by returning or recycling technology. Many programs harvest components and materials to be reused, refurbished and recycled, which reduces e-waste and keeps recycled materials in use longer. Likewise, end-of-life PCs, monitors and accessories can be returned for refurbishment or recycling to keep materials in the circular economy for longer, reducing the need to develop new materials.
  • Apply AI to find efficiencies. Within data center operations, use AI to track and analyze data to improve monitoring and workload placement. This can help optimize efficiency, right-size workloads and reduce energy costs.

Leading by Example


Data center energy use, emissions and e-waste are serious issues the industry is addressing head on. When approached mindfully, AI infrastructure development can provide a path to more sustainable operations. Recognizing that technology has an important role in addressing environmental challenges will help our industry collectively harness the tremendous potential for AI to support climate-related solutions. We should all work towards modernizing technology and modeling the “both/and” benefits of sustainability and AI.

Source: dell.com

Saturday, 3 February 2024

Demystifying Common Myths About Private Wireless Networks


Enterprises are beginning to grasp the importance of private wireless networks (PWN) as they explore use cases key to their business transformation. However, many misconceptions and myths associated with the functionality and applicability of private wireless networks continue to distract. This blog post demystifies six common myths to help enterprises understand the true state of the technology and guide them to take the right approach amid all the hype and excitement around private wireless networks.

Myth #1: Private Wireless Networks Need all Advanced 3GPP Features


Enterprises may not need advanced 3GPP features to the same extent as commercial public networks. Organizations can achieve equivalent performance—as enabled by advanced 3GPP features—in private networks through custom system design and configuration. For instance, most of the performance benefits of ultra-reliable low-latency communications (URLLC) are realizable by bringing the user-plane function closer to the user device. Similarly, network slicing is less necessary since private network resources and components are dedicated to an enterprise, and the performance characteristics for a specific workload are managed without the complication and overhead of network slicing.

Myth #2: Private Wireless Networks Will Replace Wi-Fi


Wi-Fi is a well-entrenched IT networking technology with significant advantages of cost and a known operational model. However, enterprises are discovering that Wi-Fi is not well-suited to address the needs of deployments requiring wide-area coverage with mobility, or operational technology (OT) with stringent SLAs or under high system load. These happen to be precisely the areas where private wireless systems shine. As such, we see that Wi-Fi will continue to serve in its traditional WLAN role, while private wireless networks will provide reliable service to enable operational outcomes that Wi-Fi cannot.

Myth #3: Business Outcomes Are Realized Only by Standalone, On-prem Private Wireless Networks


The connectivity solution needed to deliver business outcomes primarily depends on enterprise requirements, the network topology of the public MNO and the location of the private application. If the enterprise outcomes require localized connectivity with stringent performance requirements and maximum control to manage Quality of Service (QoS), a self-contained, isolated private wireless network in a box makes the most sense. On the other hand, if connectivity requirements are a bit more relaxed and data sovereignty is the primary driver, organizations can implement a user-traffic offload via a user plane function (UPF) on the premises. If the use case is basic connectivity with the “look-and-feel” of a private network, it could simply leverage MNO network resources to route enterprise traffic to private wireless applications. If the enterprise requires roaming, wide-area coverage, fallback or redundant coverage, it could interconnect its private network with the public MNO network in a hybrid configuration.

Myth #4: Private Wireless Networks Must be Based on 5G


Private wireless networks can be based on 5G, as well as on 4G. In fact, most of the private wireless networks built to date are based on 4G due to the maturity of the technology and the proliferation of 4G commercial and IoT devices. However, with the growth in 5G-based chipsets, devices and the 5G ecosystem, more private wireless networks will be deployed using 5G. The choice of 5G or 4G depends primarily on the performance requirements and user device compatibility. While it is true 5G can offer higher speeds and better reliability and latency than 4G, if 4G can meet the business outcome, investing in 5G may not offer the best bang for the buck.

Myth #5: CBRS General Authorized Access (GAA) Spectrum is Not Good Enough


Enterprises can potentially use the entire CBRS GAA spectrum of 150 MHz, as long as it does not interfere with other CBRS users in the vicinity. This holds true in deployments that are indoors or even outdoors in isolated facilities. In the rare event that interference with higher-tier users is detected, the Spectrum Access System (SAS) addresses it through interference mitigation techniques, such as transmit power reduction or coverage redesign. Moreover, plans are afoot to move Tier-1 users, including the Department of Defense, the U.S. Navy and the Fixed Satellite Services (FSS), to alternative spectrum bands, which will further improve availability of CBRS GAA spectrum. Consequently, the option of building private networks with CBRS GAA spectrum cannot be disregarded.

Myth #6: Private Wireless Networks Require Edge Computing and Must Replace Public Cloud


On-prem edge and public clouds work in tandem, with different applications and workloads running in different locations. On-prem edge computing with a tightly integrated private wireless network enables latency-sensitive use cases requiring sub-millisecond round-trip delays. Data is processed locally on-prem, and actions are taken immediately by deploying application workloads on-prem on edge compute infrastructure. On the other side of the architecture continuum, the public cloud can offload the edge location by hosting application workloads that do not require on-prem processing. These could include data storage, audit, analytics and AI/ML model training on data collected from edge devices.

In a nutshell, the goal of private wireless networks should be to address enterprise needs. Dell Technologies is well-positioned to help enterprises uncover misconceptions about private networks. We can combine product, service and partner capabilities into compelling solution offerings to meet enterprise business outcomes in their journey towards digital transformation.

Source: dell.com

Thursday, 1 February 2024

Accelerating GenAI with Dell APEX Flexibility

Seize the Potential


Generative AI (GenAI) is poised to transform every industry and is advancing rapidly. This tremendous opportunity also comes with challenges. Organizations harnessing its benefits will need to address skill gaps, infrastructure investment and business model transformation.


Business leaders expect to run as much as 71% of their GenAI workloads in hybrid and private clouds. These clouds span diverse on-premises environments, such as edge locations, colocation facilities and corporate data centers. This preference is not surprising given leaders are prioritizing cost control, customization, security and data integrity. Meeting these demands requires agile infrastructure for testing and scaling new GenAI initiatives, as well as quick access to emerging technology to prevent technical debt.

We are committed to evolving and meeting these demands with you. The latest Dell Technologies Validated Designs for GenAI are available via Dell APEX subscriptions. These designs enable you to rapidly adopt the latest technology on-premises, bringing GenAI to your data without extensive upfront investment.

Overcome the Challenges


An estimated 85% of new GenAI projects fail. This is due to factors such as deficient infrastructure, skill gaps, lack of customization, and data security and management issues—a truly daunting statistic for IT teams tasked with gaining competitive advantage through GenAI.

If the risk of failure is so high, why does it seem that every business has a plan to implement GenAI? The answer is that more than 70% of organizations report seeing value from GenAI within just three months.

The key to starting (or continuing) your GenAI journey is finding prime use cases and selecting solutions that directly address key challenges. Dell Validated Designs for GenAI improve your success rate by mitigating common pitfalls with pre-tested hardware and software solutions. These designs ensure your infrastructure is robust, secure and customized for your workloads.

Control the Cost


Complementing the Validated Designs, Dell APEX enables you to subscribe to Dell-owned infrastructure tailored for your unique GenAI requirements. This model allows you to pay only for what you use, aligning financial and operational needs as technology evolves. Dell APEX gives you the freedom to innovate with reduced financial risk and avoids the unpredictable costs associated with public cloud services, which can vary based on the volume of data processing and storage.

Upfront cost is typically one of the biggest barriers to entry. Infrastructure like GPUs, servers and storage represent substantial investments and typically run on set refresh cycles. This forces businesses to ask, what comes first—the investment or the benefit? The Dell APEX subscription approach can help by aligning value and usage, spreading costs over time to ensure a cost-effective and flexible pathway to harnessing the power of AI. 

Access the Latest Technology


As GenAI continues to fundamentally transform the business environment, it’s also driving greater infrastructure needs. This creates a challenge for businesses working to deploy effective solutions in competitive and changing landscapes while promoting ongoing improvement and innovation. How do IT teams avoid technical debt amidst this incredible growth?

By subscribing to infrastructure, customers can sidestep the obsolete technology and financial burdens associated with owning aging hardware. APEX ensures businesses are not locked into technology. At the end of each term, they have the flexibility to upgrade and embrace the next wave of innovation as it emerges. This aligns with the evolving needs of GenAI, allowing companies to stay in control of their data, security, insights and competitive advantage while operating at the forefront of technological advancement. 

Mitigate the Risk


The excitement surrounding the possibilities of GenAI has led to the emergence of Shadow AI, as lines of business implement solutions without centralized oversight. This unregulated adoption of GenAI leads to myriad challenges, such as unaccounted costs, security vulnerabilities and scalability issues. Dell Validated Designs for GenAI and APEX provide a structured, scalable and secure infrastructure, helping to centralize deployments and align them with organizational strategies.

Move Forward with Confidence


Some organizations are new to GenAI, while others have been on this journey for a while. Dell offers expert services to meet you where you are, aligning your data and infrastructure with your business goals. Whether you’re starting with prioritization and roadmap development, or advancing to data management for implementation, we’re here to assist. And if you need help keeping your projects on track and productive, we have ongoing management and training.

The entire GenAI portfolio can become part of your infrastructure through Dell APEX. This includes PowerEdge servers for AI, storage for AI and Precision workstations, along with a continuously expanding library of solutions tailored for training, inferencing and more. So, whether it’s steering clear of technical debt, scaling up to meet demand or keeping your AI initiatives secure and aligned with your business objectives, Dell Technologies Validated Designs with APEX subscriptions have got you covered. Head over to Dell APEX for AI to find out how we can help you unlock the full potential of Generative AI.

Source: dell.com