Saturday 27 April 2024

Embracing the Chaos: Database Resiliency Engineering


Ensuring the stability and reliability of servers is critical in today’s digital-focused business landscape. However, the complexity of modern infrastructures and the unpredictability of real-world scenarios pose a significant challenge to engineers. Imagine a critical system for sales crashing unexpectedly due to a surge in user traffic, leaving customers stranded and business teams reeling from the aftermath.

Traditional performance testing often falls short of uncovering the vulnerabilities lurking deep within complex infrastructures. Server tasks may perform optimally under controlled testing conditions, yet fail when faced with the unpredictability of everyday operations. Sudden spikes in user activity, network failures or software glitches can trigger system outages, resulting in downtime, revenue loss and damage to brand reputation.

Embracing the Problem as a Solution


This is where the Database Resiliency Engineering product emerges as an unconventional solution, offering a proactive approach to identifying weaknesses and mitigating vulnerabilities for Dell’s production and non-production servers via chaos experiments tailored for database systems. Much like stress-testing a bridge to ensure it can withstand the weight of heavy traffic, the experiment deliberately exposes servers to controlled instances of chaos, simulating abnormal conditions and outage scenarios to understand their strengths and vulnerabilities.

Imagine a scenario where a bridge is constructed without undergoing stress-testing. Everything seems fine during normal use, until one day an unusually heavy load, such as a convoy of trucks or a sudden natural disaster, puts immense pressure on the bridge. Its hidden structural weaknesses become apparent. Luckily for us, most infrastructure undergoes rigorous testing before being opened to the public. Similarly, Dell’s Chaos Experiment tool allows us to test the boundaries of our servers, identifying potential weaknesses and reinforcing critical areas proactively.

Calculated Approach to Unleashing Chaos


Performing a chaos experiment is not as simple as unleashing mayhem on our systems and watching it unfold. The goal is to fortify our systems through iterative improvement, using a multi-step approach that starts with a defined hypothesis, moves through carefully executed chaos scenarios and ends with a comprehensive improvement plan based on server responses.


Each test begins with understanding the server’s steady state, its baseline performance under optimal conditions. This is our starting point, providing a reference against which we measure the impacts of the experiments. Our database engineers will then develop hypotheses about potential weak spots, which serve as the guide to the various attacks they will perform on the server. With a click of a button, the tool introduces the selected disruptions on the server as our monitors carefully track its response every step of the way.

Chaos takes many forms—from resource consumption to network disruptions. The tool allows us to manipulate these variables, simulating real-world chaos scenarios. While the experiment is running, we keep an eye on the system behavior as we track the monitors in place, review the incoming logs and take note of any deviations from the expected. With these insights, we can form improvement plans for enhancing system resilience, optimizing server resource allocation and fortifying against potential vulnerabilities.
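
To make the cycle concrete, here is a minimal Python sketch of that loop: establish a baseline, state a hypothesis, inject a fault, observe and compare. The metric source, threshold and fault-injection hook are hypothetical placeholders, not the Dell tool itself.

```python
# Illustrative chaos-experiment cycle; the metric source and fault hook are
# hypothetical placeholders standing in for real monitors and the chaos tool.
import statistics
import time

def sample_latency_ms(n=30):
    """Placeholder: in practice this would pull query-latency samples from monitoring."""
    return [12.0] * n

def inject_fault(kind):
    """Placeholder hook where the chaos tool would apply the chosen disruption."""
    print(f"Injecting fault: {kind}")

baseline = statistics.mean(sample_latency_ms())     # steady state under normal conditions
hypothesis_limit = baseline * 1.5                   # hypothesis: latency stays under 1.5x baseline

inject_fault("cpu_stress")                          # carefully scoped, controlled disruption
time.sleep(5)                                       # give the fault time to take effect
observed = statistics.mean(sample_latency_ms())     # measure the server's response

if observed <= hypothesis_limit:
    print("Hypothesis held: system tolerated the disruption.")
else:
    print("Weakness found: feed this into the improvement plan.")
```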

Understanding Chaos’s Many Faces


The tool enables us to perform three different types of experiments, each of which allows us to alter different variables or situations (the sketch after this list shows generic equivalents):

◉ Resource consumption. The resources consumed when running an operation affect a server’s performance. By intentionally increasing resource consumption, such as memory or CPU utilization, we can test the performance and responsiveness of critical processes. Ramping up CPU utilization may lead to increased processing times for requests, while elevated memory usage could result in slower data retrieval or system crashes.

◉ System states. Just as the weather outside can change in an instant, our servers can experience sudden changes in the system environment causing unexpected behaviors. A Time Travel test alters the clock time on servers, disrupting scheduled tasks or triggering unwanted processes. A Process Killer experiment overloads targeted processes with repeated signals, simulating scenarios where certain processes become unresponsive or fail under stress.

◉ Network conditions. Stable communication between components is vital for server operations to perform optimally. Altering network conditions allows us to learn how the system responds to different communication challenges. A Blackhole test deliberately shuts off communications between components, simulating network failures or isolation scenarios. A Latency test introduces delays between components, mimicking high network congestion or degraded connectivity.
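
For readers who want a feel for what these disruptions look like at the operating-system level, the snippet below lists generic Linux-level equivalents of each category. The commands, interface names and targets are illustrative stand-ins and are not the commands our tool issues.

```python
# Generic Linux-level fault injections that map to the three experiment categories.
# Everything here is illustrative: device names, durations and targets are placeholders.
ILLUSTRATIVE_FAULTS = {
    # Resource consumption: burn CPU and memory for a bounded window
    "resource_consumption": "stress-ng --cpu 4 --vm 2 --vm-bytes 1G --timeout 60s",
    # System states: Time Travel (skew the clock) and Process Killer (signal a target process)
    "time_travel": "date -s \"$(date -d '+2 hours')\"",
    "process_killer": "kill -s SIGSTOP <db_pid>",
    # Network conditions: Blackhole (drop traffic to a peer) and Latency (inject delay)
    "blackhole": "iptables -A OUTPUT -d <replica_ip> -j DROP",
    "latency": "tc qdisc add dev eth0 root netem delay 200ms",
}

for experiment, command in ILLUSTRATIVE_FAULTS.items():
    print(f"{experiment:22s} -> {command}")
```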

Building Resilience in a World of Uncertainties


The continuous cycle of testing, discovering and enhancing enables us to iteratively improve our capacity to withstand potential disruptions with each experiment. By addressing infrastructure vulnerabilities before they escalate into costly incidents, we prevent millions in potential revenue loss and allow our team members to focus more time on modernization activities instead of incident resolution tasks. Moreover, it instills confidence in our teams knowing their infrastructure has been tried and tested for any challenges that may arise.

Embracing chaos as a solution underscores our understanding that chaos is not the end, but a means to achieving a stronger, more resilient infrastructure environment. Instead of reacting to the world’s unpredictability, we are bolstering our ability to adapt and thrive in an ever-evolving digital landscape.

Source: dell.com

Thursday 25 April 2024

Build the Future of AI with Meta Llama 3


Open-source AI projects have democratized access to cutting-edge AI technologies, fostering collaboration and driving rapid progress in the field. Meta’s announcement of the release of Meta Llama 3 models marks a significant advancement in the open-source AI foundation model space. Llama 3 is an accessible, open large language model (LLM) designed for developers, researchers and businesses to build, experiment and responsibly scale their generative AI ideas. These latest generation LLMs build upon the success of the Meta Llama 2 models, offering improvements in performance, accuracy and capabilities.

Key advancements in Llama 3 include enhancements in post-training procedures, aimed at improving capabilities such as reasoning, code generation and following instructions. Additionally, improvements in the model architecture, such as an increased vocabulary size and a greatly improved tokenizer, enable more efficient language encoding. The input token context size has also been increased from 4K to 8K, benefiting use cases with large input tokens, such as RAG (retrieval-augmented generation).
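
As a rough, hands-on illustration of the tokenizer change, the sketch below counts how many tokens each generation’s tokenizer produces for the same sentence. It assumes the Hugging Face transformers library and approved access to the gated meta-llama repositories; the sample sentence is arbitrary.

```python
# Sketch: compare Llama 2 and Llama 3 tokenizers on the same text.
# Requires `pip install transformers` and a Hugging Face account with access
# to the gated meta-llama model repositories.
from transformers import AutoTokenizer

text = "Retrieval-augmented generation grounds model output in enterprise data."

for repo in ("meta-llama/Llama-2-7b-hf", "meta-llama/Meta-Llama-3-8B"):
    tok = AutoTokenizer.from_pretrained(repo)
    n_tokens = len(tok(text)["input_ids"])
    print(f"{repo}: vocab size {tok.vocab_size:,}, tokens for sample text: {n_tokens}")
```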

Use Cases


Currently, four variants of Llama 3 models are available: 8B and 70B parameter models, each in pre-trained and instruction-tuned versions. Enterprises can leverage the open distribution and commercially permissive license of Llama models to deploy these models on-premises for a wide range of use cases, including chatbots, customer assistance, code generation and document creation.


Dell PowerEdge and Meta Llama models: A Powerhouse Solution for Generative AI


At Dell Technologies, we are very excited about our continued collaboration with Meta and the advancements in the open-source model ecosystem. By providing the robust infrastructure required to support the deployment and utilization of these large language models, Dell is committed to making it easier for customers to deploy LLMs on-premises through Dell Validated Designs for AI. We provide optimal end-to-end integrated infrastructure solutions to fine-tune and deploy these models within our customers’ own IT environments, without sending sensitive data to the cloud or resorting to costly proprietary models and closed ecosystems.

Dell’s engineers have been actively working with Meta to deploy the Llama 3 models on Dell’s compute platforms, including the PowerEdge XE9680, XE8640 and R760XA, leveraging a mix of GPU models. Since Llama 3 models are based on a standard decoder-only transformer architecture, they can be seamlessly integrated into customers’ existing software infrastructure, including inference frameworks such as TGI, vLLM or TensorRT-LLM.
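
As a hedged example of how little glue code such frameworks require, here is a minimal vLLM sketch. The model ID, prompt and sampling parameters are illustrative, and running it assumes access to the gated Llama 3 weights and a suitable GPU.

```python
# Minimal vLLM inference sketch for an instruction-tuned Llama 3 model.
# Assumes `pip install vllm`, a CUDA-capable GPU and access to the gated weights.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")
params = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate(["Summarize the benefits of on-premises LLM inferencing."], params)
for request_output in outputs:
    print(request_output.outputs[0].text)
```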

In the coming weeks, Dell will provide test results, performance data and deployment recipes showcasing how easy it is to deploy Llama 3 models on Dell infrastructure and how the performance compares to Llama 2 models. This ongoing collaboration between Dell and Meta underscores the commitment to advancing the open-source AI ecosystem through community-driven innovation and empowering enterprises to harness the power of AI within their own IT environments.

Get Started with the Dell Accelerator Workshop for Generative AI


Dell Technologies offers guidance on AI target use cases, data management requirements, operational skills and processes. Our services experts work with your team to share our point of view on AI and help your team define the key opportunities, challenges and priorities.

Source: dell.com

Tuesday 23 April 2024

AI Literacy – A Foundational Building Block to Digital Equity?


I have a very strong memory of my mother teaching me how to spell “Wednesday.” She was busy packing lunches in preparation for a normal elementary school day, and she called out, “It starts out W-E-D, like wedding. It’s tricky.” I spelled it correctly in my spelling quiz that day, and to this day, I recall her tip when I write the word.

On National AI Literacy Day, this simple memory makes me think of the value of passing foundational skills through generations. Is the world ready to pass around foundational AI skills?

AI has the potential to create solutions for society’s biggest challenges, but it also has the potential to cause another digital divide or further exacerbate the one we already have. Without widespread AI access, adoption and literacy—from knowing how to input effective prompts to understanding the tools available—we’re limiting access to the opportunities AI can offer and increasing the gap between technology’s haves and have-nots.

Dell Technologies has been working with The National Digital Inclusion Alliance (NDIA) to get in front of this issue. Together, we’re encouraging conversation about the use of AI tools, how to expand access to them, how AI could be used to increase access to technology and how all these activities could result in a diversity of voices influencing AI development and inclusive global policymaking.

Understanding societal needs around AI is critical. Here are some of our observations and areas of focus.

Equitable Access to Devices and Internet Connectivity is Still a Baseline Need


Easy access to technology such as devices and high-speed Internet connectivity is a foundational need to access the benefits of our digital society. Since access to these basic tools is not equitable today, access to generative AI (GenAI) is also not equitable. This inequality is being recognized by governments around the world, including G7 ministers who have committed to collaborate with others to enhance local AI digital ecosystems.

By continuing to build and deploy digital inclusion programs focused on improving access to technology and the skills to use it, particularly in underrepresented communities, there will be more diverse people with familiarity and understanding of new technology like GenAI. This understanding of and access to the tools, along with inclusive regulatory design and deployment, is a necessary component of building safer and less biased GenAI tools.

Digital Skills Training Programs Are Essential for People to Safely and Confidently Use AI


Teaching how to safely use AI is a natural component of digital inclusion skills programs. If a digital inclusion practitioner is teaching someone how to use a browser, they can also teach an individual how to use GenAI tools like ChatGPT or Microsoft’s Copilot. Just as digital inclusion instructors teach community members about Internet security and how to be safe by identifying bogus websites and phishing, they’ll also play an essential role in teaching others how to verify content provided by GenAI tools. Bridging a widening skills gap is also critical to prepare a diverse workforce for the future. It is important that workforce development programs across the public and private sectors incorporate a basic AI curriculum, due to its essential value for our present and future. 

Communities Need Trusted Experts to Help People Access Digital Services


The most effective way of bringing more people online is to connect them with a trusted person who can guide them through the process. AI tools can accelerate the work of these trusted individuals. However, these digital navigators—experts from the digital inclusion community—need to be part of the whole process. For example, Dell supports Drexel’s ExCITe Center, which assists community members in connecting to free or low-cost Internet services and provides computer maintenance, refurbishing services and training to the community. Future tools that accelerate the ExCITe Center’s mission could be backed by AI technology, but their efficacy and success will depend on the trusted community members at the center of their design and deployment.

“As the NDIA community, we know that technology alone will not solve the digital divide. Humans are essential to digital inclusion, to help introduce emerging technologies and guide the use of new technologies.”

– Angela Siefer, Executive Director, National Digital Inclusion Alliance (NDIA)

At NDIA’s Net Inclusion Conference in February 2024, Dell Technologies built the AI Discovery Hub, powered by Dell equipment. Young adults from Hopeworks, a nonprofit organization, became AI digital guides for the day and ran personalized demos of GenAI productivity-centric tools. Feedback was extremely positive: “I really enjoyed that Dell showcased a non-profit doing digital navigator work specifically with AI to help users better understand how AI can benefit them. This is something that has been asked of us from our community. Dell’s AI Hub brought education and ease to what can be a scary topic to many in the digital inclusion space.”

AI Literacy to Empower All


My mother’s tip for remembering how to spell “Wednesday” forms part of one small brick in the foundation of my overall literacy skills, much like understanding AI provides a crucial link for navigating the rapidly evolving digital world.

By focusing on digital skills training, trustworthy digital navigation and equitable access to technology, organizations like the NDIA and Dell Technologies are paving the way for a future where AI can be a tool for empowerment for all. GenAI could help millions of people leapfrog stages of their digital journey—and everyone deserves to jump off the same foundation.

Source: dell.com

Saturday 20 April 2024

Scale Up or Out with AMD-Powered Servers


Remember when big data was something only very big companies worried about? Or when artificial intelligence was science fiction rather than sound business strategy? Today, the majority of enterprises are using AI and big data to innovate and maintain their competitive advantage—and that means more pressure on IT teams to beef up their data center, cloud and edge processing capabilities.

Big data and real-time AI consume a lot of processing power, which typically means adding more servers (and finding more floorspace to store them), consuming more energy and spending more time managing hardware. The latest generation of Dell PowerEdge servers featuring AMD’s fourth-generation EPYC processor technology is designed to change all that. Small, energy-efficient, scalable and simple to manage, the new AMD-equipped PowerEdge servers are the right fit for enterprises that need to balance scalability with sustainability.

Better Scalability, Sustainability and Performance


You would expect higher performance from the latest generation of AMD-powered servers from Dell Technologies. But better performance is just the beginning. Dell and AMD have worked together to make their latest generation of PowerEdge servers more efficient through Smart Cooling technology, more secure through a silicon-based root of trust and simpler to manage through new infrastructure automation tools. Here are some of the new features you’ll find in the latest generation of AMD-equipped PowerEdge servers.

  • 2X higher performance. The new AMD EPYC processors deliver up to 107% higher processing performance and up to 33% more storage capacity than their predecessor. That leads to faster business insights and improved application performance across the board.
  • Denser and more efficient. The fourth-generation AMD EPYC processors deliver 50% more density in the core than previous generations, resulting in 47% higher performance per watt for better energy efficiency. In addition, Dell’s Smart Cooling technology reduces the server’s energy consumption through improved airflow and optimized cooling features.
  • Easier to manage. Dell’s management tools, such as OpenManage Enterprise Power Manager and integrated Dell Remote Access Controller (iDRAC), make it easier than ever before to manage and automate bare metal servers, from correcting configuration drift to optimizing energy usage (see the sketch after this list).
  • More secure. PowerEdge servers feature a silicon-based root of trust that protects against external attacks. AMD EPYC processors also feature Infinity Guard, which helps reduce the threat surface of the server during operation.
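
As one example of what that automation can look like in practice, the sketch below queries a PowerEdge server’s current power draw through the iDRAC Redfish REST API. The address and credentials are placeholders, and the exact resource path can vary by iDRAC firmware version.

```python
# Query current power draw from a PowerEdge iDRAC over the Redfish REST API.
# The address, credentials and resource path are illustrative placeholders.
import requests

IDRAC = "https://192.0.2.10"        # iDRAC address (example value)
AUTH = ("root", "calvin")           # placeholder credentials; use real ones securely

resp = requests.get(
    f"{IDRAC}/redfish/v1/Chassis/System.Embedded.1/Power",
    auth=AUTH,
    verify=False,                   # lab-only: skip TLS verification for self-signed certs
    timeout=10,
)
resp.raise_for_status()
watts = resp.json()["PowerControl"][0]["PowerConsumedWatts"]
print(f"Current power draw: {watts} W")
```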

Flexible Configurations for a Broad Range of Applications


As more business applications move into the cloud and out toward the edge, scalability becomes critically important—both the ability to scale up and to scale out. Dell PowerEdge servers with AMD technology are available in one- and two-socket configurations to handle a variety of different workloads. Our single-socket, 1U PowerEdge R6615 servers are ideal for applications that require maximum performance with a minimal footprint. For more compute-intensive applications such as artificial intelligence and real-time processing of large data sets, two-socket, 2U PowerEdge R7625 servers provide a high-performance platform that can easily scale to handle your most demanding applications.

PowerEdge servers with AMD EPYC processors are designed to handle big jobs, but what’s most impressive about them is what’s not big about them. You’ll have smaller energy bills because they consume less energy thanks to improved density in the design and innovative cooling solutions, as in the case of the PowerEdge C6615. With OpenManage Power Manager, you can also reduce the size of your carbon footprint by managing your power consumption for optimal efficiency. You can fit our AMD-powered servers into existing footprints without worrying about overheating.

Whether you’re upgrading your legacy servers or looking for a scalable foundation for new applications, PowerEdge servers featuring fourth-generation AMD EPYC processors deliver big performance in a more energy-efficient chassis. It’s the best of both worlds—scalability and sustainability.

Source: dell.com

Thursday 18 April 2024

Experience Choice in AI with Dell PowerEdge and Intel Gaudi3


The Dell PowerEdge XE9680 server has evolved to become a key player in the acceleration of AI and generative AI, machine learning, deep learning training and HPC modeling. The introduction of Intel® Gaudi® 3 AI Accelerator into this server lineup marks a significant advancement, offering an enhanced suite of technical capabilities designed to meet the needs of complex, data-intensive workloads. This evolution caters to a more diversified range of workloads, providing developers and enterprise professionals with options for pushing the limits of GenAI acceleration.

Harnessing Silicon Diversity for Customized Solutions


The Dell PowerEdge XE9680 distinguishes itself by being Dell’s pioneering platform integrating eight-way GPU acceleration with x86 server architecture, renowned for delivering unparalleled performance in AI-centric operations. The incorporation of the Intel Gaudi3 accelerator into this ecosystem further amplifies its capabilities, offering clients a choice to tailor their systems to specific computational needs, particularly associated with GenAI workloads. This strategic addition underscores a commitment to delivering no-compromise AI acceleration solutions that are both versatile and powerful.

Technical Specifications Driving Customer Success


With an architecture designed to thrive in up to 35°C environments, the XE9680 promotes reliability and scalability. The addition of Intel Gaudi3 accelerators enriches the server’s configuration possibilities. This includes up to 32 DDR5 memory DIMM slots for enhanced data throughput, 16 EDSFF3 storage drives for superior data storage solutions and eight PCIe Gen 5.0 slots for expanded connectivity and bandwidth. Coupled with dual 4th Generation Intel® Xeon® Scalable processors boasting up to 56 cores, the XE9680 is engineered to excel in executing demanding AI and ML workloads, providing a competitive edge in data processing and analysis.

Strategic Advancements for AI Insights


The PowerEdge XE9680, with the integration of additional accelerators, rises above conventional hardware capabilities, serving as a critical asset for businesses aiming to leverage AI for deep data insights. This combination of advanced processing power and efficient, air-cooled design sets a new benchmark in AI acceleration, delivering rapid, actionable insights to drive business outcomes.

Technological Openness Fostering Innovation


The Intel Gaudi3 AI accelerator brings to the table performance features vital for generative AI workloads, including 64 custom and programmable tensor processor cores (TPCs), 128 GB of HBM2e memory capacity offering 3.7 TB/s of memory bandwidth, and 96 MB of on-board static random-access memory (SRAM). The Gaudi3’s open ecosystem is optimized through partnerships and supported by a robust framework of model libraries. Its development tools simplify the transition for existing codebases, reducing migration effort to a mere handful of code lines.
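
As a hedged illustration of that claim, the sketch below follows the pattern documented for the Gaudi PyTorch bridge: an existing model is moved to the "hpu" device with a few added lines. The model and tensor shapes are arbitrary, and the exact API calls may differ across Gaudi software releases.

```python
# Sketch of migrating a stock PyTorch workload to a Gaudi accelerator (HPU).
# Assumes the Intel Gaudi software suite is installed; APIs may vary by release.
import torch
import habana_frameworks.torch.core as htcore   # registers the "hpu" device

device = torch.device("hpu")

model = torch.nn.Linear(1024, 1024).to(device)  # any existing PyTorch model
x = torch.randn(8, 1024, device=device)

y = model(x)
htcore.mark_step()                              # flush queued ops to the accelerator
print(y.shape)
```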

Unique Networking and Video Decoding Capabilities


The PowerEdge XE9680, augmented with the Gaudi3 accelerator, introduces novel networking capabilities directly integrated into the accelerators via six OSFP 800GbE ports. These links allow direct connections to an external accelerator fabric without the need for external NICs in the system. This not only simplifies the infrastructure but also aims to lower total cost of ownership and overall complexity. Furthermore, the Intel Gaudi3 specialized media decoders are designed for AI vision applications. They are capable of handling extensive pre-processing tasks, streamlining video-to-text conversions and enhancing performance for enterprise AI applications.

Dell PowerEdge XE9680 with the Gaudi3: A Visionary Step Forward in AI Development


The collaboration between Dell and Intel, crystallized in the Dell PowerEdge XE9680 with the Intel Gaudi3 AI accelerator, is a watershed moment in AI computing. It offers a forward-looking solution that meets the current demands of AI workloads and anticipates the future needs of the industry. This partnership promises to empower technology leaders with advanced tools for innovation, pushing the boundaries of AI development and setting new standards for computational excellence and efficiency.

Source: dell.com

Saturday 13 April 2024

Connected PCs: Expanding Dell’s Relationship with AT&T


In a world increasingly reliant on seamless connectivity, the need for reliable and secure network access has never been greater. Recognizing this, Dell is teaming up with AT&T to introduce a new promotion aimed at meeting the evolving needs of modern businesses.

Addressing Modern Connectivity Demands


As technology continues to advance, cellular connectivity has become indispensable, particularly in supporting the demands of GenAI use cases and the needs of remote and hybrid workers. According to research conducted by Dell on the GLG platform, there’s a growing expectation that cellular connectivity, especially with the promise of 5G, will play a more significant role in the future. For organizations that need to integrate data from distant locations, satellite and cellular connections may be the only viable options.

Moreover, connectivity isn’t just essential for individual users; it’s integral to the modern PC experience across various industries. From real estate to construction and professional services, connected PCs are seen as essential tools, enabling seamless communication and collaboration.

However, poor connectivity remains a significant frustration for many employees, ranking among the top three technology challenges in the workplace. With hybrid work becoming the new norm, the demand for reliable connectivity has only intensified. A significant number of companies have embraced a hybrid work model, with most employees favoring a blend of office and remote work arrangements.

Security Concerns in the Age of Hybrid Work


Despite the benefits of connectivity, security concerns loom large, especially in a hybrid work environment. Security breaches and cybercrimes pose significant threats, exacerbated by the increasing reliance on public Wi-Fi networks. Security professionals often overlook that many successful network breaches are unknowingly facilitated (though rarely intentionally perpetrated) by their own employees, contractors and other internal users.

The integration of AI further heightens security risks, particularly with sensitive data and cloud-based applications. As AI applications become more pervasive across industries like healthcare, customer service and autonomous vehicles, ensuring data security becomes paramount.

Enhanced Security with Connected PCs


To address these challenges, Dell and AT&T are introducing a carrier promotion that not only enhances connectivity but also prioritizes security. By leveraging encrypted and secure cellular networks, connected PCs offer protection against unauthorized access and cyberattacks. Additionally, these PCs comply with the latest security standards and certifications, providing end-to-end security through Dell’s suite of security solutions.

Seamless Integration and Enhanced Performance


Connected PCs powered by AT&T’s network offer more than just security; they also deliver enhanced performance and a seamless user experience. With high-speed 4G and 5G networks, users can expect low-latency data transmission, supporting the data and computing demands of AI applications. This translates to better support for mobile and hybrid workers, enabling flexible work schedules and instant connectivity without the hassle of tethering or relying on public Wi-Fi networks. In places where wireline alternatives are insufficient or unavailable, prioritizing 5G as the primary connection option is paramount.

The Carrier Referral Program: Simplifying Connectivity


Central to this promotion is the Carrier Referral Program, designed to accelerate the adoption of Connected PCs by simplifying the process of selecting, purchasing and activating carrier data plans. Targeting any commercial customer in the U.S. with a need for remote PC connectivity, the program offers up to a $100 bill credit and discounted monthly rates from AT&T for customers who activate service on eligible plans. Additionally, customers receive concierge support from AT&T to help select the right data plan and facilitate service activation at their convenience. Margaret Rooney-McMillen, AT&T Head of Strategic Partner Sales and Business Solutions VPGM, said, “AT&T is excited about this opportunity with Dell. Secure connectivity is critical to business workers, who are the heart and soul of AT&T business. The concierge service we’ve built to support Dell’s Carrier Referral Program is purpose driven, showcasing customer experience as our highest priority. We’re looking forward to seeing the success of this collaboration.”

Empowering Connectivity for the Future


Dell’s carrier promotion with AT&T represents a significant step forward in empowering connectivity for the future. By addressing the dual needs of reliable network access and robust security, this collaboration promises to redefine the way consumers and businesses experience connectivity. With seamless integration, enhanced performance and dedicated support, Dell and AT&T are committed to making connectivity simpler, safer and more accessible for all.

Source: dell.com

Tuesday 9 April 2024

Plan Inferencing Locations to Accelerate Your GenAI Strategies


Businesses expect Generative AI (GenAI) to improve productivity, reduce costs and accelerate innovation. However, implementing GenAI solutions is no trivial task. It requires a lot of data, computational resources and expertise.

One of the most critical stages of GenAI model operation is inferencing, in which outputs are generated from a trained model based on user requests. Inferencing can have significant implications for the performance, scalability, longevity and cost-effectiveness of GenAI solutions. Therefore, it’s important for businesses to consider how they can optimize their inferencing strategy and choose the best deployment option for their needs.

Leveraging RAG to Optimize LLMs


Large language models (LLMs), such as GPT-4, Llama 2 and Mistral, hold a lot of potential. They’re used for various applications, from chatbots to content creation and even writing code. However, LLMs depend on the data they’re trained on for accuracy.

Depending on the need for customization, some organizations may choose to implement pretrained LLMs, while others may build their own AI solutions from scratch. A third option is to pair an LLM with retrieval augmented generation (RAG), a technique for improving the accuracy of LLMs with facts from external data sources, such as corporate datasets.
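
For readers new to the pattern, here is a minimal RAG-style sketch: retrieve the most relevant corporate snippet for a query, then fold it into the prompt. It assumes the open-source sentence-transformers library for retrieval; the snippets, query and model name are made up, and the final LLM call is left as a placeholder.

```python
# Minimal retrieval-augmented generation sketch: retrieve, then augment the prompt.
# Requires `pip install sentence-transformers`; all data here is illustrative.
from sentence_transformers import SentenceTransformer, util

corpus = [
    "Invoices over $10,000 require director approval.",
    "Standard warranty on PowerEdge servers is three years.",
    "Travel expenses must be filed within 30 days.",
]
query = "How long is the server warranty?"

encoder = SentenceTransformer("all-MiniLM-L6-v2")
scores = util.cos_sim(encoder.encode(query), encoder.encode(corpus))[0]
best_fact = corpus[int(scores.argmax())]        # most relevant corporate snippet

prompt = f"Answer using this context:\n{best_fact}\n\nQuestion: {query}"
print(prompt)                                    # this augmented prompt would be sent to the LLM
```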

Considerations for Where to Run Inferencing


To help determine where to place an inferencing solution, consider important qualifiers such as the number of requests that will be sent to the model, the number of hours a day the model will be active and how usage will scale over time. Additional considerations include the quality and speed of output and the amount of proprietary data that will be used.
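
A back-of-the-envelope sketch of how those qualifiers combine into a sizing estimate is shown below. Every number is a hypothetical input chosen for illustration, not a Dell benchmark.

```python
# Toy sizing estimate built from the qualifiers above; all inputs are hypothetical.
requests_per_day = 20_000          # expected user requests sent to the model
active_hours_per_day = 12          # hours per day the model serves traffic
tokens_per_request = 1_500         # prompt + retrieved context + generated output
annual_growth = 1.5                # expected usage scaling over the next year

peak_requests_per_hour = requests_per_day / active_hours_per_day
tokens_per_month = requests_per_day * tokens_per_request * 30
tokens_per_month_next_year = tokens_per_month * annual_growth

print(f"Peak load: {peak_requests_per_hour:,.0f} requests/hour")
print(f"Volume today: {tokens_per_month / 1e9:.1f}B tokens/month; "
      f"in a year: {tokens_per_month_next_year / 1e9:.1f}B tokens/month")
```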

Inferencing On-premises Can Save Costs and Accelerate Innovation


For GenAI solutions that pair LLMs with RAG, inferencing on-premises can be a better option than inferencing through the public cloud.

Inferencing LLMs and RAG in the public cloud can be expensive, as they can incur high data transfer, storage and compute fees. According to a recent study commissioned by Dell Technologies, Enterprise Strategy Group (ESG) found that inferencing on-premises can be more cost-effective. Inferencing LLMs and RAG on-premises with Dell solutions can be 38% to 75% more cost effective when compared to the public cloud.

ESG also found that Dell’s solutions were up to 88% more cost effective compared to APIs. As the size of the model and the number of users increased, the cost effectiveness of inferencing on-premises with Dell grew.

LLMs paired with RAG can generate sensitive and confidential output that may contain personal or business information. Inferencing in the public cloud can be risky, as it can expose the data and outputs to other parties. Inferencing on-premises can be more secure, since data and outputs remain within a company’s network and firewall.

LLMs and RAG can benefit from continuous learning and improvement based on user feedback and domain knowledge. By running inferencing on-premises, innovation can flourish without being bound by a cloud provider’s update and deployment cycles.

Leverage a Broad Ecosystem to Accelerate Your GenAI Journey


At Dell, we empower you to bring AI to your data, no matter where it resides, including on-premises in edge environments and colocation facilities, as well as in private and public cloud environments. We simplify and accelerate your GenAI journey, creating better outcomes tailored to your needs, while safeguarding your proprietary data, with sustainability top of mind.

We offer a robust ecosystem of partners and Dell services to assist you, whether you’re just starting out or scaling up in your GenAI journey, and provide comprehensive solutions that deliver ultimate flexibility now and into the future. In addition, with Dell APEX, organizations can subscribe to GenAI solutions and optimize them for multicloud use cases.

Source: dell.com

Saturday 6 April 2024

The Most Manageable Commercial PCs – From Dell


Organizations that manage fleets of commercial PCs know just how difficult a task this can be. Meeting the need to configure and manage devices on a timely basis, while juggling urgent update requests, ensuring compliance and meeting established SLAs can at times turn a “run the business” operation into a full-scale escalation.

The truth is that the PC and the applications used to manage these operations are intrinsically connected. You can’t have secure and reliably operating PCs without regular updates—and that requires having an OEM’s specific updates working smoothly and reliably with your PC management suite and tools.

Dell Technologies employs a holistic approach to its PC management processes to help IT admins better manage updates, monitor their fleets and balance both to minimize the impact to end user productivity. Our perspective is that you need to be able to easily manage Dell devices, lessening your administrative burden. We accomplish this by understanding the capabilities that will help you better manage Dell devices. And we then build these capabilities into our processes for both Dell tools and well-known commercial device management applications.

For instance, Dell provides capabilities thoughtfully designed to work together with applications like Microsoft Intune and Workspace ONE, while leveraging built-in remote “out of band” management features from Intel vPro. The result of these key integrations is that Dell PCs are the industry’s “most manageable commercial PCs”: organizations benefit from tested, predictable updates, can more easily maintain device security posture and compliance, and gain a built-in capability to track down and remotely manage devices.

For instance, the Dell Trusted Update Experience makes it simple to update endpoints with the latest BIOS, driver and firmware versions.

Dell is the only Top-5 PC vendor that:​

  • Publishes a device drivers and downloads release schedule, so IT admins can deploy fleet-wide device-updates on a predictable timeline.
  • Performs integrated validation of all driver and BIOS modules in an update.


Dell’s commercial PC systems management solution, Dell Client Command Suite, has industry-leading integrations with unified endpoint management solutions such as Microsoft Intune and Workspace ONE. IT admins can tap into these integrations to further streamline the management of Dell commercial PCs. For example, IT admins can:

  • Securely configure BIOS settings, on a fleet of Dell commercial PCs, natively in Microsoft Intune.
  • Configure unique per-device BIOS passwords, on a fleet of Dell commercial PCs, natively in Microsoft Intune.

With Dell Client Command Suite’s integration with Workspace ONE, IT admins can securely manage BIOS, firmware, OS and system updates—from the cloud.

And with Dell Command | Intel vPro Out of Band, IT admins can remotely manage systems out-of-band—even when a system is offline or its operating system is inaccessible.

By offering a unique, simplified and reliable BIOS, driver and firmware version update process, and with industry-leading third-party systems management solution integrations, Dell commercial PCs are designed to be the most manageable commercial PCs. Organizations benefit from PCs that are easier to manage, more secure and more predictable—with less risk of employee work interruption.

Source: dell.com

Thursday 4 April 2024

Dell Technologies Powering Creativity from Studio to Screen


When I mention to people that I’m gearing up to attend the National Association of Broadcasters event in Las Vegas with the Dell media and entertainment team, I always receive a lot of interest in how Dell Technologies supports this exciting industry. You may be surprised to learn that Dell Technologies has been helping major film studios, broadcasters and game developers create their media for years. Dell’s longstanding and proven reliability has positioned us as a highly regarded and trusted industry brand.

Over the years, Dell has been committed to driving innovation across its technology portfolio, from workstations to displays, as well as partnering with other industry leaders like NVIDIA and AMD to enable creatives to do their best work. An example of this is the incredible work produced by Orbital Studios using Dell Precision workstations. Check out the recent video Orbital produced discussing the role technology played in their recent efficiency gains.

As part of Dell’s storage team, we get to help customers process and manage massive amounts of data. Storage has always been important for artist workflows like video editing, broadcast, visual effects and animation for films and episodic television, but newer workflows like AI, virtual production and game engine technology have emerged on the scene and are making storage, particularly all-flash storage like Dell PowerScale, extremely valuable for accelerating workflows.

Generative AI (GenAI), fueled by machine learning algorithms, is shifting the paradigm across industries. But in the media and entertainment realm, this technology enhances creativity and amplifies human potential by allowing creators to fine-tune their vision, from generating realistic human faces to crafting entire landscapes. Filmmakers can now utilize GenAI to streamline pre-production processes such as concept art creation and storyboarding. Additionally, it opens up new avenues for procedural content generation in video games, enabling vast and dynamic virtual worlds. Still, other AI applications are worthy of more than an honorable mention because they’re creating dramatic efficiency gains, particularly in repetitive tasks, like rotoscoping.

Another interesting advancement for the industry is virtual production. Virtual production has revolutionized the filmmaking process, blurring the lines between physical sets and digital environments. By integrating real-time rendering technology with traditional filmmaking techniques, directors can visualize scenes in virtual environments, saving time and resources during production. From capturing complex visual effects to exploring alternative camera angles on the fly, virtual production empowers filmmakers to unleash their creativity like never before.

And then we have game engines, which have long been at the forefront of interactive entertainment, powering everything from AAA video games to immersive virtual reality experiences. However, their versatility extends beyond gaming, with applications in film, television, and advertising. By leveraging game engines like Unreal Engine and Unity, content creators can prototype, visualize and iterate on their ideas in real-time, facilitating collaboration and experimentation throughout the creative process.

These technology advancements are ultimately allowing media and entertainment companies to take audiences to new levels. Consider the latest Avatar or Mad Max movies as prime examples of how a studio can leverage these next-gen practices to break the ceiling on immersive storytelling. I’m filled with wonder considering the possibility of what broadcasting, movies and gaming will look like in the near and distant future. The possibilities stretch past the reaches of my mind.

That said, the computational demands of these innovative workflows are substantial. Processing enormous datasets and, in the instance of AI, training complex neural networks, requires powerful infrastructure. This is where PowerScale comes into play and why I’m so excited to join this incredible team. PowerScale offers a scalable architecture and the high-performance processing power needed to efficiently tackle large-scale media and entertainment workloads.

As you might imagine, I can become quite long-winded given the opportunity to talk about how Dell is helping to drive innovation in the entertainment space, but I know my enthusiasm is ultimately understood. Enabling creatives to push the boundaries of their imagination elevates our collective entertainment experiences to new and exciting levels. And who doesn’t want more of that?

Source: dell.com

Tuesday 2 April 2024

Reduce the Attack Surface


Advancing cybersecurity and Zero Trust maturity starts by focusing on three core practice areas: reducing the attack surface, detecting and responding to cyber threats and recovering from a cyberattack throughout the infrastructure, including edge, core and cloud. This blog post will focus on reducing the attack surface—a critical component of cybersecurity that helps strengthen your security posture.

The attack surface refers to all potential areas in an environment that a cyber attacker can target or exploit. These points can include software vulnerabilities, misconfigurations, weak authentication mechanisms, unpatched systems, excessive user privileges, open network ports, poor physical security and more.
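
As one small, concrete example of mapping that surface, the sketch below uses only Python’s standard library to check a handful of common TCP ports on a host you are authorized to assess. The address and port list are illustrative.

```python
# Enumerate open TCP ports on a host you own, as one input to an attack-surface review.
# The host address and port list are illustrative placeholders.
import socket

HOST = "192.0.2.25"                              # a host you are authorized to assess
COMMON_PORTS = [22, 80, 443, 1433, 3306, 3389, 5432, 8080]

open_ports = []
for port in COMMON_PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        if s.connect_ex((HOST, port)) == 0:      # 0 means the connection was accepted
            open_ports.append(port)

print(f"Open ports to review (and close if unneeded): {open_ports}")
```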

Reducing the attack surface is a cybersecurity concept and strategy that focuses on minimizing the potential vulnerabilities and entry points that attackers can exploit to compromise a system, network or organization across various domains including the edge, the core or the cloud. Reducing the attack surface decreases the opportunities for malicious actors to launch successful cyberattacks, while at the same time creating a safe space for organizations to innovate and thrive.

To reduce the attack surface, organizations employ various measures and strategies, including:

  • Apply Zero Trust principles. Zero Trust is a security concept centered on the belief that organizations should not automatically trust anything inside or outside their perimeters and instead must verify everything trying to connect to their systems before granting access. Organizations can achieve a Zero Trust model by incorporating solutions like micro-segmentation, identity and access management (IAM), multi-factor authentication (MFA) and security analytics, to name a few.
  • Patch and update regularly. Keeping operating systems, software and applications up to date with the latest security patches helps address known vulnerabilities and minimize the risk of exploitation.
  • Ensure secure configuration. Systems, networks and devices need to be correctly configured with security best practices, such as disabling unnecessary services, using strong passwords and enforcing access controls, to reduce the potential attack surface.
  • Apply the principle of least privilege. Limit user and system accounts to only have the minimum access rights necessary to perform their tasks. This approach restricts the potential impact of an attacker gaining unauthorized access.
  • Use network segmentation. Dividing a network into segments or zones with different security levels helps contain an attack and prevents lateral movement of a cyber threat by isolating critical assets and limiting access between different parts of the network.
  • Ensure application security. Implementing secure coding practices, conducting regular security testing and code reviews and using web application firewalls (WAFs) help protect against common application-level attacks and reduce the attack surface of web applications.
  • Utilize AI/ML. Leverage these capabilities to help proactively identify and patch vulnerabilities, significantly shrinking the attack surface. AI/ML tools can help organizations scale security capabilities.
  • Work with suppliers who maintain a secure supply chain. Ensure a trusted foundation with devices and infrastructure that are designed, manufactured and delivered with security in mind. Suppliers that provide a secure supply chain, a secure development lifecycle and rigorous threat modeling keep you a step ahead of threat actors.
  • Educate users and promote awareness. Training employees and users to recognize and report potential security threats, phishing attempts and social engineering tactics helps minimize the risk of successful attacks that exploit human vulnerabilities.
  • Use experienced professional services and partnerships. Collaborating with knowledgeable and experienced cybersecurity service providers and forming partnerships with business and technology partners can bring in expertise and solutions that might not be available in-house. This can enhance the overall security posture of an organization.

Starting with an assessment and performing regular audits, penetration testing and vulnerability assessments, along with the help of experienced services or partners, can help identify areas for improvement within your attack surface. As cyber threats continue to evolve, it’s important to remember cybersecurity is not a one-time task but an ongoing process. And as organizations look to build a robust, thriving, innovative company, cybersecurity is paramount. By proactively implementing these measures, organizations can effectively reduce the attack surface, helping to mitigate risks and making it more challenging for adversaries to exploit vulnerabilities, enhancing the overall defense posture against new and emerging threats. Reducing your attack surface helps you to advance your cybersecurity maturity.

Source: dell.com