Saturday 31 December 2022

Exploring the Interconnected Datacenter with Dell APEX and Equinix


Public clouds entered the mainstream because businesses were seeking a way to get up and running quickly, develop new applications and avoid the costs and management overhead associated with large CapEx IT investments. The industry has since learned that not all applications are ideal for the public cloud. Some users found out the hard way that performance, availability, scale, security and cost predictability can be difficult to achieve there. As a result, the cloud operating model is quickly becoming pervasive on-premises.

As cloud deployment options have evolved, leading-edge organizations have aspired to make strategic use of multiple clouds – public and private. With increasingly dispersed data, this calls for a distributed deployment model. The goal is to achieve multicloud by design rather than standing up disparate silos across clouds, also known as multicloud by default.

However, the distributed deployment model is not easy to deliver, since standing up datacenters is complex and requires significant financial commitments. How can organizations embrace a modernized datacenter model with private cloud? One way is to leverage as-a-service IT resources in colocation (colo) facilities wherever they need to deploy applications. In a recent IDC survey of U.S.-based IT decision makers, 51% indicated they were using hosting and colocation services for data center operations. In addition, 95% of respondents indicated they plan to use such services by 2024.

Enter Dell APEX and Equinix. Customers have the option to subscribe to interconnected Dell APEX Data Storage Services in a secure Dell-managed colocation facility, available through a partnership between Dell and Equinix. This solution enables a quick and easy deployment of scalable and elastic storage resources in various locations across the globe with high-speed access to public clouds, IT services and business collaborators, all delivered as-a-service.

To dig deeper into the trends toward as-a-service and colocation to address multicloud challenges, Dell commissioned IDC to develop a Spotlight Paper that features the Dell APEX and Equinix offer.

Here is a snapshot of some of the key benefits:

Dell APEX Delivers:


Simplicity. Gain a unified acquisition, billing and support experience from Dell in the APEX Console with simplified operations and a reduced burden of datacenter management.

Agility. Expand quickly to new business regions and service providers. Build a cloud-like as-a-service experience that offers fast time-to-value and multicloud access with no vendor lock-in.

Control. Leverage leading technologies and IT expertise on a global scale. Deliver a secure, dedicated infrastructure deployment with the flexibility to connect to public clouds while maintaining data integrity, security, resiliency and performance.

Equinix Services Provide:


Cloud adjacency. Dedicated IT solutions (cloud, compute, storage, protection) are directly connected and in close physical proximity to public cloud providers, software-as-a-service providers, industry-aligned partners and suppliers.

Interconnected Enterprise. Digital leaders leverage Equinix to align Dell’s IT solutions with organizational and user demands across metro areas, countries and continents.

Intelligent Edge. Enterprises aggregate and control data from multiple sources – at the edge and across the organization – and then provide access to artificial intelligence, machine learning and deep analytic engines without moving the data. This deployment future-proofs access to data, radically reducing data egress costs while connecting to the engines of innovation now and in the future.

The combination of Dell APEX and Equinix interconnected colocation offers true differentiation and is key for organizations to truly achieve digital transformation goals in the multicloud world.

Source: dell.com

Thursday 29 December 2022

Data-Driven Innovation Meets Sustainable PC Design: Concept Luna’s Evolution


Imagine a future where we don’t simply discard used electronics. Rather, we harvest individual components for a second, third or even fourth life. Once the device itself is truly at the end of life, we refurbish and recycle it to incorporate these same materials into next-generation laptops, monitors or phones. It’s a future where nothing goes to waste, and we dramatically reduce the mountain of electronics discarded every year – more than 57 million tons globally. Not only is technology dematerialized, but the materials we use also fuel a robust circular economy, reducing the need for new, raw materials.

Last year, we introduced Concept Luna, our breakthrough sustainable PC design, which illustrates our vision of how we can reduce waste and emissions, reuse materials and achieve next-level innovation. Our Experience Innovation Group engineers have worked over the last year to further refine the modular design of Concept Luna, eliminating the need for adhesives and cables, and minimizing the use of screws. These refinements make it easier to repair and dismantle a system. Concept Luna could dramatically simplify and accelerate repair and disassembly processes, making components more accessible and expanding opportunities for reuse.

It can take recycling partners more than an hour to disassemble a PC with today’s technology, held together with screws, glues and various soldered components. With our evolved Concept Luna design, we’ve reduced disassembly time to mere minutes. We even commissioned a micro-factory to guide our design team, resulting in a device that robots can quickly and easily take apart.


By marrying Concept Luna’s sustainable design with intelligent telemetry and robotic automation, we’ve created something with the potential to trigger a seismic shift in the industry and drive circularity at scale. A single sustainable device is one thing, but the real opportunity is the potential impact on millions of tech devices sold each year and optimizing the materials in those devices for future reuse, refurbishment or recycling.

The telemetry we added to Concept Luna also provides the opportunity to diagnose the health of individual system components to help ensure nothing goes to waste. Because the way customers use their technology varies, not all components reach end-of-life at the same time. People working from home, for example, may rely on external keyboards and monitors, so the laptop’s built-in keyboard and display have barely been used even when the motherboard is ready to be replaced. Our Concept Luna evolution can equip and connect individual components to telemetry to optimize their lifespans. At its simplest, it’s akin to how we maintain our vehicles: we don’t throw away the entire car when we need new tires or brakes.
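To make the idea concrete, here is a minimal sketch of how per-component telemetry could drive a reuse/refurbish/recycle decision. The thresholds, component names and rated lifespans are hypothetical illustrations, not actual Concept Luna parameters.

```python
from dataclasses import dataclass

# Hypothetical wear thresholds (fractions of rated lifespan) -- illustrative only.
REUSE_MAX = 0.5       # below this, a component is a good reuse candidate
REFURBISH_MAX = 0.85  # below this, refurbishment is still worthwhile

@dataclass
class ComponentTelemetry:
    name: str
    hours_used: float
    rated_hours: float

    @property
    def wear(self) -> float:
        # Fraction of rated lifespan consumed, capped at 100%.
        return min(self.hours_used / self.rated_hours, 1.0)

    def disposition(self) -> str:
        if self.wear < REUSE_MAX:
            return "reuse"
        if self.wear < REFURBISH_MAX:
            return "refurbish"
        return "recycle"

# A home-office laptop: built-in keyboard barely used, motherboard heavily used.
keyboard = ComponentTelemetry("keyboard", hours_used=800, rated_hours=20000)
motherboard = ComponentTelemetry("motherboard", hours_used=18000, rated_hours=20000)
print(keyboard.disposition())     # reuse
print(motherboard.disposition())  # recycle
```

The point of the sketch is the car analogy in code: each part carries its own odometer, so one worn component doesn’t condemn the rest of the device.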

Our ongoing work with Concept Luna brings us closer to a future where more devices are engineered with a modular design. The exciting addition of robotics and automation serve as a catalyst to accelerate efficient device disassembly, measure component health and remaining usability, and better understand which components can be reused, refurbished or recycled – so nothing goes to waste. This vision has broad and profound implications for us, our customers and the industry at large, as we work together to reduce e-waste.

These are the explorations that inspire our team of engineers, passionate sustainability experts and designers to continue to evolve Concept Luna. And while Concept Luna is “just a concept” right now, it is a long-term vision for how we achieve an even greater business and societal impact through circular design practices. As we make strides toward achieving our Advancing Sustainability goals, we will continue to innovate, push design boundaries, solicit feedback and rethink business models. Driving breakthrough advancements and shaping a more sustainable future for all is what Concept Luna is about. I am honored to be a part of this journey.

Source: dell.com

Tuesday 27 December 2022

Taming Big SAP Data Landscapes


Extending Dell’s comprehensive portfolio of SAP HANA TDI certified systems, the new Dell S5000 Series servers, scaling from six to 16 sockets with up to 24TB of memory, deliver performance and scale to organizations with large landscapes that want to deploy on-premises using shared infrastructure.

For many large organizations, SAP HANA offers capabilities that are critical for operating in a modern, digital economy, including real-time processing, advanced analytics and insights gleaned from big data and the Internet of Things. With large data volumes expected to grow at more than a 21% CAGR over the next five years, organizations need an infrastructure solution that can scale with the demands of the business and support growth for years to come.

To keep up with this increasing amount of data, many organizations are making the transition to the cloud. However, some organizations needing to respond to data sovereignty and data residency constraints, or address application latency and data entanglement concerns, want to retain complete control over their systems and data. There are also organizations that want to continue to protect current investments in on-premises SAP infrastructure, tools and operational processes.

SAP HANA Tailored Datacenter Integration (TDI) solutions offer flexibility when integrating SAP HANA systems into existing data centers. This deployment approach enables organizations to choose their preferred hardware vendors and infrastructure from a list of supported SAP HANA hardware. Organizations can also leverage existing hardware and operation processes in their data centers.

Optimize for Big SAP HANA TDI Landscapes


To address these challenges, we have released the Dell S5000 Series server, exclusively for SAP HANA TDI. Powered by Intel® Xeon® Scalable (Cascade Lake) processors, the up-to-eight-socket Dell S5408 8U rack and the up-to-16-socket Dell S5416 21U rack are delivered as integrated, pre-configured systems for quicker deployment and faster time to value.

Delivering exceptional real-time performance and enabling flexible, cost-effective and reliable response, the Dell S5000 Series server:

◉ Boosts real-time applications with industry-leading performance. Delivers all the benefits of a modern, in-memory database and applications with up to 24TB of memory and industry-leading two-tier SAP SD benchmark performance for eight-socket Intel® Cascade Lake systems: 674,080 SAPS.

◉ Optimizes total cost of ownership with granular scalability. Reduces overprovisioning, expensive and disruptive upgrades, and re-platforming with easy expansion from six to 16 sockets, thanks to a unique architecture that allows growth in increments from a two-socket base unit.

◉ Keeps operations running with mission-critical reliability. Anticipates potential failures and simplifies preventive maintenance with thousands of control points, early-warning features, Intel® RAS features including Run Sure® technology, and several innovative memory-protection features.

Broad Portfolio of Offerings


Dell Technologies offers one of the industry’s broadest and most innovative server portfolios certified for SAP HANA TDI. Organizations can choose two- and four-socket Dell PowerEdge servers with up to 6TB of memory – and now six- to 16-socket Dell S5000 Series servers with up to 24TB of memory. All servers are available with a choice of processors for the best fit in terms of frequency, number of cores or power consumption. They can be combined with SAP-certified Dell networking, storage and data protection solutions – with all the infrastructure backed by the Dell services ecosystem to keep your business running.

Source: dell.com

Sunday 25 December 2022

Technology is a Catalyst for a More Sustainable Future


2022 is expected to rank among the ten warmest years on record, according to the US National Oceanic and Atmospheric Administration. This is just one of the many effects of climate change.

Technology is essential to tackling the biggest issues we are facing today. One key focus at this year’s COP27 was the promise of innovation and sustainable technologies. According to Gartner, sustainable technology has become one of the top three priorities for investors and is among the top 10 strategic technology trends for 2023.

At Dell Technologies, we put sustainability at the core of everything we do, setting strong commitments and taking the right actions to reduce our environmental impact and drive positive outcomes for business and society. From how we make our innovative products to what our customers, partners and communities can do with them, our technology will help create a better, more sustainable future.

Here are some ways we can leverage technology to achieve positive outcomes for business and the environment.

Efficient Data Centers for a Digital-first World


Data centers are among the most energy-intensive facilities in any industry. With nearly 40% of their energy going to cooling systems that maintain a temperature-controlled environment, estimates suggest data centers account for up to 5% of global greenhouse gas emissions.

The silver lining? Energy efficiency best practices and enhancements in IT hardware and cooling technologies have curtailed the growth in energy demand from data centers globally. Dell recently launched a green data center in India for leading Fintech company PhonePe, designed and built with advanced alternative cooling technologies. It uses liquid immersion technology and is optimized for increased energy efficiency, resulting in significant energy savings and reduced carbon footprint.

Designing Servers with Purpose


Now more than ever, we are considering how we source and produce technology, as well as our portfolio’s impact on the environment. At Dell, we define efficient design as one that maximizes the amount of work completed with the fewest resources possible.

That is why our next generation of PowerEdge servers offers advancements in Dell Smart Cooling technology and features greater core density, reducing the heat generated, the energy consumed and the burden on other resources required to power the systems. Engineering advancements have helped us reduce the energy intensity of PowerEdge servers by 83% since 2013 and increase energy efficiency by 29% over previous generations. PowerEdge servers also contain up to 35% recycled plastic. Dell also offers a multipack option, allowing servers to be delivered more sustainably when shipping multiple servers. Following this strategy ensures each box wastes no space, energy or opportunity.

Take Trintech, a rapidly growing financial SaaS provider, as an example. By migrating its SQL server workload onto Dell PowerEdge Servers, Trintech experienced gains beyond its sustainability goals. With the capacity to support three times the original number of customers, the company achieved increased revenue, flexibility, scalability, and ease of deployment and management.

Closing the Loop with 3 Rs: Repair, Reuse, Recycle


The world produces more than 50 million tons of e-waste each year, and when devices are not disposed of properly, they harm human health and the environment. At Dell, we are committed to shifting from a linear to a circular economy. This means ensuring products no longer in use are repaired for reuse or recycled, keeping products and materials in circulation for longer; otherwise they become nothing more than waste, contributing to environmental degradation. Dell has committed to reuse or recycle one product for every equivalent product sold, and to make 100% of our packaging and more than half of product content from recycled or renewable materials by 2030.

Last December, we launched Concept Luna, a prototype laptop designed to explore revolutionary design ideas that make components immediately accessible, replaceable and reusable. Every facet of the system is meticulously designed to reduce resource use and keep even more circular materials in the economy. If realized, we estimate it could reduce our overall product carbon footprint by 50%.

As António Guterres, Secretary-General of the United Nations, puts it bluntly: “Prioritize climate or face catastrophe.” Businesses have a part to play in preserving and restoring the environment, even in the face of global headwinds. The good news is, we have the right tools and processes to solve today’s most pressing climate issues. By leveraging technology as a catalyst, we will start a chain of positive actions that puts us on a path to a better, more sustainable future.

Source: dell.com

Saturday 24 December 2022

Dell is Democratizing Data with SRE


While Site Reliability Engineering (SRE) begins with gathering data from across IT organizations to create a bird’s eye view of ecosystems so we can monitor, fix and prevent system issues, the value of that aggregated data doesn’t stop there. Sharing observability data beyond SRE engineers to teams across those organizations not only increases transparency, but it also taps the potential for new improvements and innovation.

That’s why Dell Digital’s Site Reliability Engineering Enablement team is making data available to product owners, business users and operations teams across Dell IT via a two-way chat tool we call the SRE Assistant.

Beyond using chat to provide team members easily understandable insights into ecosystem issues, the SRE Assistant is evolving to give users access to other non-systems-health data –from sales numbers to customer satisfaction – with a simple query.

It’s all part of democratizing data so team members across Dell IT’s organizations, regardless of their technical know-how, can work with data comfortably, feel confident talking about it and, as a result, make data-informed decisions and build customer experiences powered by data.

Democratizing System Alert Data


Dell Digital, Dell’s IT organization, began piloting an SRE strategy two years ago to reduce downtime in our eCommerce environment. We expanded this effort to create an SRE initiative to help organizations across IT use SRE practices to improve their product reliability and increase maintenance efficiency.

Part of the SRE process is targeting specific development teams impacted by issues with system alert notifications that use those teams’ preferred communication channels. As we created these incident alert communication channels, we realized it made sense to make them two-way, so users could both receive alerts and ask questions. To that end, we built a chatbot using the collaboration tool framework, channeling alerts to specific team members who could, in turn, seek further details via chat.

An important goal of our alert notification strategy was standardizing and simplifying the alert data we send when issues arise.

Our SRE observability tool aggregates a wide range of data to provide a bird’s eye view and determine solutions. Our data might come from a network device, a storage device, an application database, or applications for ecommerce and order, inventory and incident management. Alerts involve different stakeholders, different KPIs and different thresholds at which something is breaching.

To help team members at all levels of technical knowledge easily interpret issue alerts, we consolidate multiple dashboard metrics and categorize incidents using a percentage-based scoring system from 0% to 100%, color-coded red, yellow and green.

This is a key step in democratizing data. For example, let’s say there’s a network outage in a data center. Traditionally, only the network team is notified about it right away. Other team members might come to know about it later in the process and may not be comfortable with technical details of the event.

By breaking alert data down in a way that everyone understands, it doesn’t matter if alert recipients know that subject or not. Someone may not know anything about networks or databases, but the color or grade of the alert offers a basic and quick understanding of the problem.
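A minimal sketch of this kind of grading logic might look like the following. The specific thresholds are hypothetical; the article does not publish the actual scoring bands.

```python
# Hypothetical thresholds -- the real SRE scoring bands are not published.
def grade_alert(score: float) -> str:
    """Map a 0-100% health score onto a traffic-light grade
    that any team member can read at a glance."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 90:
        return "green"
    if score >= 70:
        return "yellow"
    return "red"

# A non-specialist reading an alert only needs the color, not the raw metrics.
print(grade_alert(95))  # green
print(grade_alert(78))  # yellow
print(grade_alert(40))  # red
```

The design choice is the same one the article describes: collapse many dashboard metrics into one score, then collapse the score into a color, so the alert is meaningful regardless of the recipient’s technical background.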

Simplifying Data Requests


With more team members accessing and understanding system health data via our chatbot, we had another inspiration. What if we share the extensive array of data we have collected from the SRE practice more widely?

We have aggregated data on sales, our service management platform, our application stack, networks, databases and more. We decided this data is rich enough to evangelize to a broader audience of team members.

The SRE Assistant could pull out data from what we have collected in response to specific team member requests. It would access APIs (application programming interfaces), which for much of the data would fetch requested information from our observability tool. It could also fetch data from non-SRE sources using APIs. A salesperson could get daily order totals. A service provider could check customer satisfaction numbers. And since SRE Assistant is available on our main collaboration tool, they can do so on their mobile device.

This data is available in separate tools across IT, but until now there was no single source where team members could get that information all in one place.

This is the other aspect of democratizing data: breaking down tool silos and bringing together necessary information where users are in their communication channels.

Not only are team members able to ask questions using the SRE Assistant about IT system performance, but they can also now ask about a business function, such as how it’s performing at a given point of time.

Our chatbot is a bit like the ubiquitous digital assistants Alexa or Siri. Users just frame a question in the bot, and the SRE Assistant will use APIs to pull the relevant information from a source and present it in the chat.
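As a rough illustration of that routing idea, here is a toy intent router, assuming a pattern-matching dispatch to per-source handlers. The intents, handler names and response strings are all hypothetical; in a real deployment the handlers would call the REST APIs of the observability tool, sales platform and so on.

```python
import re

# Hypothetical handlers standing in for real API calls.
def daily_orders() -> str:
    return "Orders today: 1,284"    # placeholder for a sales-API response

def csat() -> str:
    return "CSAT this week: 4.6/5"  # placeholder for a service-API response

# Each pattern maps a class of free-text questions to a data source.
INTENTS = [
    (re.compile(r"\borders?\b", re.I), daily_orders),
    (re.compile(r"\b(csat|satisfaction)\b", re.I), csat),
]

def sre_assistant(question: str) -> str:
    """Route a chat question to the first matching data source."""
    for pattern, handler in INTENTS:
        if pattern.search(question):
            return handler()
    return "Sorry, I don't have data for that yet."

print(sre_assistant("What are today's order totals?"))  # Orders today: 1,284
```

The sketch shows the essential shape: the bot owns no data itself, it just brokers questions to the APIs that do, which is what lets new data sources be added organically as teams request them.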

Taking Our Data Capabilities on the Road


While our team hasn’t formally unveiled it to users yet, the SRE Assistant chatbot is an idea that has been well-received so far by the limited number of current users.

The data selection provided has grown organically as team members have added requests. On the alert notification front, we have seen a lot of promise around increasing team member collaboration in response to sharing system issues. With alerts now being made available teamwide via the bot, everyone sees the same thing and there is an urgency to fix things.

Overall, the cross-pollination of information the SRE Assistant provides blurs the silos and encourages outreach and collaboration. It increases transparency about system performance. And perhaps most importantly, because it uses our central collaboration tool that is available on mobile devices, users can access alerts and data wherever they are. So, they receive up-to-date information about their systems and can make queries with ease.

In the coming months, we’re sharing the SRE Assistant more broadly across teams, product owners and the business community using our SRE Enablement program, educating them on its chatbot capabilities.

We are convinced there is a lot to be learned from sharing our SRE data across IT. We expect using our data wisely will yield better opportunities to improve how we serve our customers.

Source: dell.com

Thursday 22 December 2022

Improving Sustainability with AI and Making AI More Sustainable


Organizations across a wide range of industries are stepping up efforts to become more sustainable and energy efficient. Increasingly, artificial intelligence (AI) is becoming a trusted tool to help them accomplish those goals. 

There are many reasons why organizations are adopting sustainable practices. Nations around the world are passing new legislation to encourage — and in some cases mandate — carbon-neutral practices. Investors are looking to fund companies with a reputation for environmental responsibility. Consumers are expressing their views with their wallets, purchasing products that are eco-friendly.  

With pressure from all sides, it’s no wonder business leaders want to become more sustainable. To help them achieve these goals, many companies have turned to AI to help them optimize efficiency, identify areas of the business that aren’t operating optimally and mitigate risk. 

Sustainability Use Cases for AI 


Some of the most obvious applications of AI to improve sustainability are in the energy industry. AI can help utilities optimize production and delivery to improve efficiency and reduce harmful emissions. AI-powered analytics are improving weather and demand forecasting to help utilities prepare for severe weather. And oil and gas companies are using AI to analyze ground scans to find the best places to drill, reducing waste and minimizing damage to the environment. 

In the transportation industry, AI is helping logistics companies streamline routes and speed supply chains to reduce greenhouse-gas emissions (GHG). As the automotive industry converts from internal combustion engines to electric vehicles, manufacturers are using AI to create the algorithms and software that power the new vehicles. And in much the same way oil and gas companies use AI to help them find the best places to drill, mining companies are using AI to help them find the best places to access the metals necessary for electric vehicle batteries, helping to decrease pollution and disruption to nature. 

In the agriculture sector, Nature Fresh Farms, one of the largest independent greenhouse produce growers in Canada, analyzes data and high-resolution video as close to the edge as possible to enable growers to create optimal conditions that enhance produce quality and yield. To reduce overall water usage, spoon-like devices under each plant measure how much water goes unused so growers can adjust irrigation as needed.

Companies in many different industries are using AI to control new smart buildings that make better use of energy. For example, Siemens is helping customers reduce their buildings’ carbon footprints by leveraging edge and AI technologies to address building performance issues in real time. And AI is helping data centers reduce both power consumption and the need for cooling. That’s important because while AI offers a lot of potential benefits for the environment, it also consumes a lot of energy. 

Reducing the Energy Required to do AI  


AI applications are compute- and memory-intensive. They require significant electricity to run the servers that power them and to cool down the data centers where they operate.  

Fortunately, there are steps organizations can take to minimize the negative environmental impacts of their AI efforts. One of the most impactful steps is to purchase infrastructure designed to be as energy efficient as possible. At Dell Technologies, we’re helping our customers drive positive solutions to achieve their sustainability goals through the power of innovative products and services designed to reduce waste, energy use and emissions. Since 2013, we have reduced the energy intensity of our PowerEdge servers by 83% (based on internal analysis from June 2022). Through continued innovation, what required six servers in 2013 can now be accomplished with one server today.

When it comes to Dell storage solutions, here again we are reducing the energy intensity of our technology and making it more efficient with each new generation. Dell PowerMax delivers 80% power savings per TB compared with the previous generation, and Dell PowerStore 3.0 delivers up to 60% more IOPS per watt. Finally, our HCI systems align with sustainable IT goals, with multiple PowerEdge server models achieving EPEAT registered product status.

Investing in innovative technology allows organizations to use AI to improve their sustainability efforts while also minimizing the negative impact of AI on the environment.  

Source: dell.com

Tuesday 20 December 2022

Why Automated Discovery Changes the Game for NVMe/TCP

Back in August, I wrote about how you can boost your workload performance with an NVMe IP SAN and how Dell Technologies helps simplify adoption by validating the components used within the NVMe/TCP ecosystem and providing automation with SmartFabric Storage Software. In this post, I want to focus on the automated discovery, our secret sauce to simplify the deployment.

When deploying NVMe/TCP, organizations often face a common challenge: managing connectivity between the host and storage array. This is due to the considerable amount of work involved, including manually configuring each host to point at the appropriate array-based Discovery Controller, monitoring the established connections and remediating any connectivity failures that may occur.

How SmartFabric Storage Software’s Automated Discovery Solves the Challenge


SmartFabric Storage Software (SFSS) removes the need for configuring each host interface to each storage interface one-by-one. Instead, host and subsystems automatically discover the SFSS instance and register with it. SFSS then acts as a broker to help establish communication between the hosts and subsystems. Customers can configure the host and subsystem relationships via zoning, just as with Fibre Channel (FC).

Ease of deployment, especially in medium and large enterprise environments, is a principal reason some users prefer FC over other IP-based technologies, despite the cost-saving potential and performance improvements of IP. For network infrastructure deployment, Ethernet can cost up to 89% less than FC in some solutions.

In addition, a scalable, automated, standards-based discovery service such as SFSS is a big boost for other operating environments (e.g., edge and cloud).


How Automated Discovery Works


The existing discovery approach for NVMe/TCP requires customers to repeat the following three steps on every host for each subsystem. First, the host admin connects the host to a Discovery Controller at a specific IP address. Next, the storage admin provisions namespaces (storage volumes) to the host NQN. Only then can the host admin discover and connect to the I/O Controllers on that subsystem.

This process might work well in smaller fabrics with a few dozen host and storage endpoints. However, manually configuring and discovering each storage subsystem on a host quickly becomes difficult and error-prone for administrators as fabrics scale.

With the new, standards-based, Centralized Discovery that includes support for automated discovery, after the initial setup, users only need to repeat steps two and three below for each host on each subsystem. This cuts the deployment time while minimizing potential errors.

1. Hosts and subsystems automatically discover the CDC, connect to it and register their discovery information
2. Zoning is performed on the CDC (optional)
3. The storage admin provisions namespaces to the host NQN; the storage may send zoning information to the CDC
4. After zoning, the host receives an AEN, issues a Get Log Page command and connects to each I/O Controller
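The broker role the CDC plays in the steps above can be sketched in a few lines. This is an illustrative model only, not Dell’s implementation: the class, method names and NQNs are all hypothetical, but the logic mirrors the flow of registration, zoning and filtered discovery log pages.

```python
# Toy model of a Centralized Discovery Controller (CDC) acting as a broker
# between NVMe hosts and storage subsystems. Names are illustrative.
class CDC:
    def __init__(self):
        self.hosts = set()        # registered host NQNs
        self.subsystems = {}      # subsystem NQN -> transport address
        self.zones = set()        # permitted (host NQN, subsystem NQN) pairs

    def register_host(self, host_nqn):
        self.hosts.add(host_nqn)

    def register_subsystem(self, subsys_nqn, address):
        self.subsystems[subsys_nqn] = address

    def zone(self, host_nqn, subsys_nqn):
        # Zoning: explicitly permit a host to see a subsystem
        self.zones.add((host_nqn, subsys_nqn))

    def get_log_page(self, host_nqn):
        # A host's discovery log contains only subsystems it is zoned to see
        return {s: addr for s, addr in self.subsystems.items()
                if (host_nqn, s) in self.zones}

cdc = CDC()
cdc.register_host("nqn.2014-08.org.example:host1")
cdc.register_subsystem("nqn.2014-08.org.example:array1", "192.168.10.20:4420")
cdc.register_subsystem("nqn.2014-08.org.example:array2", "192.168.10.21:4420")
cdc.zone("nqn.2014-08.org.example:host1", "nqn.2014-08.org.example:array1")

# host1 discovers only the one subsystem it is zoned to, not array2
print(cdc.get_log_page("nqn.2014-08.org.example:host1"))
```

The key property is that hosts never enumerate subsystems directly: they ask the central broker, and zoning decides what each host is allowed to discover.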

[Figure: How automated discovery works]

What’s New for the December 2022 Release of Dell NVMe IP SAN


Dell Technologies continues its commitment to simplifying our customers’ ability to benefit from NVMe/TCP. The new December release expands the validated ecosystem to PowerStore running PowerStoreOS 3.x, PowerEdge running VMware ESXi 8.0 and the PowerSwitch Z9664.

[Figure: Complete list of validated Dell products]

As for deployment automation, SmartFabric Storage Software now supports VMware ESXi 8.0 and RHEL 9.1 (Tech Preview), with a complete list of OS support below.

[Figure: Complete list of SFSS OS support]

Dell Technologies is excited about our progress and committed to helping customers improve workload performance while saving costs with NVMe/TCP Technology.

Source: dell.com

Sunday 18 December 2022

Driving 5G Innovation


Dell was founded in 1984. That may be a long time in our fast-moving industry, but rest assured, we’re not your grandmother’s technology company. We’re always looking for new ways to create value, innovate and drive progress, and nowhere is this more apparent than in the telecommunications space.

A New Era of Telecommunications has Arrived


The telecom industry is rapidly entering a new chapter of 5G and edge-enabled connectivity built on open, cloud-native and software-defined networks. As monolithic, closed-network architectures fall by the wayside, a fast-growing ecosystem of vendors is enabling providers to access best-of-breed technology, increase network capacity and ultimately deliver lucrative new services to both consumers and enterprises.

But treading new ground is not without its challenges, and there remains a lot of work to be done before telecom fully realizes its open future. No single company can do this alone, and collaboration has already proven essential to advancing 5G innovation. Dell Technologies is currently collaborating with more than 30 established and emerging industry initiatives to overcome key strategic challenges, develop validated solutions, find diverse new talent and more.

Reshaping Radio Access Networks — Together


Telecommunications networks are undergoing a process of unprecedented transformation. Having shaken off historic vendor lock-in, operators and providers are now working to build a new breed of network capable of meeting strict SLAs for performance, availability and security. That means developing reference architectures, overcoming complex integration challenges and enabling automation to manage devices distributed across thousands of sites.

As a longstanding contributor to the O-RAN Alliance, Dell Technologies is participating in most of its active working groups. A recent example is the Cloudification and Orchestration Work Group, where we are focused on the decoupling of RAN software from underlying hardware platforms. Together with other members of the alliance, we are producing technology and reference designs that will allow users to leverage commodity hardware platforms for all parts of a RAN deployment.

We are also collaborating with leading partners like Intel and VMware to build an open network ecosystem. We are currently developing a cloud-native Open RAN reference architecture offering a complete solution deployed on Dell PowerEdge XR11, XR12 and R750 servers.

What’s more, Dell Technologies is one of only six vendors selected to be part of the next Generation Research Group’s (nGRG) Technical Oversight Committee. The nGRG is responsible for defining the O-RAN Alliance’s 6G research agenda and key priorities. That includes achieving O-RAN sustainability from 4G/5G to 6G and beyond, as well as unifying the 6G technology path to avoid incompatibility with other standards development organizations (SDOs).

Connecting Disruptive Start-ups with Willing Investors and Established Enterprises


If the promise of 5G is to become a reality, disruptive start-ups and visionary thinkers need the freedom to explore, experiment, develop and test new solutions. They need access to investment, as well as the latest telecom equipment.

Dell Technologies is a founding member of the 5G Open Innovation Lab, which has already assembled a global community of 87 start-ups, more than 17 leading enterprises and more than 100 venture capital investors — all in just two years. The lab has generated $1.34 billion of investment so far and helped transform start-up ideas into pressure-tested, enterprise-class, market-ready solutions. In doing so, we have enabled enterprises to innovate with confidence by identifying and deploying solutions that advance digital transformation with reduced risk.

Some recent success stories include collaborating with Expeto, a leading enterprise mobile Networks-as-a-Service (NaaS) provider, on several customer opportunities. We’ve also helped Sunlight.io, an established hyperconverged platform provider, develop reference architectures in collaboration with another leading cloud-based orchestration solution provider.

Setting the Standard for Innovative New Solutions


Founded in 2016, the Telecom Infra Project (TIP) is a community of diverse members that includes hundreds of companies, from service providers and technology partners to systems integrators and other connectivity stakeholders. TIP helps to develop, test and deploy open, disaggregated, standards-based solutions that deliver high-quality connectivity on a global scale.

Dell Technologies holds a seat on the TIP board of directors and its Policy Committee, where we collaborate with global policymakers, government organizations and public-private partnerships to accelerate an industry-wide shift toward open, disaggregated and standards-based connectivity solutions.

We work closely with TIP project groups on testing and integration of TIP-incubated technologies to evaluate a product’s maturity toward commercial readiness. TIP badges are then awarded to products and solutions that demonstrate adequate levels of maturity against the technical requirements they seek to address. Dell Technologies PowerEdge R750, XR11 and XR12 models have all been awarded the TIP Supplier Validated Product (Bronze) Badge and are now available on the TIP Exchange as a result.

We’re also actively involved in multiple project groups working to define and build 5G RAN solutions based on general-purpose vendor-neutral hardware, interfaces and software. We’re helping to accelerate innovation in optical and IP networks, make 5G private networks accessible to a broad range of use cases and customers, as well as jointly developing a connectivity solution for Wi-Fi, SmallCells and Power over Ethernet (PoE) Switching.

Open Collaboration is More than Valuable, It’s Essential


This is just a taste of how Dell Technologies is contributing to important industry-wide initiatives across the telecom space. It is our capacity to influence and be influenced by these key programs that helps us continually drive innovation, cultivate diverse talent and ultimately deliver on the promise of 5G.

Source: dell.com

Saturday 17 December 2022

Sustainability Doesn’t Need to Be Hard


Customers, regulators, employees and the world expect companies today to prioritize sustainability throughout their business. For organizations that rely heavily on technology devices, meeting that expectation can be tricky, putting more responsibility on IT decision makers (ITDMs) to manage device lifecycles efficiently and effectively while also working toward long-term corporate sustainability objectives.

Given the complexities inherent in establishing, maintaining and verifying sustainability practices, it is no wonder 71% of companies say they need a partner to accelerate their programs to achieve their sustainability goals.

With the right IT partner and strategy, organizations can reduce their environmental impact and place themselves in a better position to meet company-wide sustainability goals. At Dell Technologies, we believe complete device lifecycle management begins the day a device enters inventory and ends with responsible asset disposition.

It’s Time to Clean Out the Storage Closets


Electronic waste is a growing problem, and companies must ensure they direct unused devices to responsible recycling programs that will properly handle and dispose of the waste. This can be a complex and time-consuming process for IT managers, requiring them to research and select reputable recyclers and to transport and deliver the e-waste for proper disposal. Companies must also ensure their e-waste is fully documented and tracked to comply with local, state and national regulations. A third-party service provider can help overcome these challenges, assuming the vendor has the scale, visibility and experience with the post-use supply chain.

Don’t Let Shipping Materials Overwhelm You


The sustainability challenge posed by the packaging used to ship and receive devices to and from employees is significant, and even more so in this hybrid work environment. Factor in global supply chains, varied product sizes and weights, local regulations, and the need to reduce waste and ensure recyclability and you have a tremendously complex opportunity.

Materials play an important role in the sustainability of these packages. In addition to the environmental costs of producing new packaging materials and disposing of used ones, there are financial costs associated with these practices. Companies must remain aware of changing recycling regulations that can drive up their recycling costs, while also investing in sustainable practices to reduce their carbon footprint. Moreover, not all materials are recyclable, so companies have to seek ways to reuse or repurpose materials that would otherwise end up in landfills.

Get the Most Use Out of Company Assets


One significant challenge businesses face when trying to maximize utilization of desktops, laptops and other electronics is that these devices often aren’t returned when employees leave the company.

Here’s a real-world example of this scenario. A large health plan provider reported several thousand computers were unaccounted for due to high annual employee turnover. This combination of factors could significantly hamper the company’s ability to responsibly repair, reuse or retire its equipment and promote sustainable recycling. By partnering with a device lifecycle services provider, the company quickly turned things around, making significant strides in meeting its business goals.

A Better Way to Dispose of Devices


Managing the retirement of devices on a large scale can be a complex task. Most organizations today only focus on the return of systems — the desktop or laptop — but what about all the peripheral items like monitors, keyboards, mice and headsets? Typically, these items end up in landfills. Companies must consider multiple factors to address this challenge, such as vendor management, secure data disposal and environmental regulations, while adhering to related standards and best practices. Working with the right partner who understands the importance of responsible disposal can reduce risks and ensure those retired devices have lasting value and minimal environmental impact.

Sustainability Benefits Driven with Dell Services


Through services like our Lifecycle Hub, Dell offers impact-generating solutions to help our clients meet their sustainability targets while achieving business outcomes. With capabilities that include return, repair and redeploy services, we help maximize device utilization by getting devices back for reuse or disposition, utilizing packaging materials from recycled or renewable sources as part of the process. We’ve worked hard to make Lifecycle Hub a comprehensive IT device lifecycle management solution that addresses the complexity of managing and maintaining your fleet of electronic devices, while also helping meet corporate social responsibility and environmental stewardship standards.

Lifecycle Hub works alongside our Dell Asset Recovery Services to sanitize devices and prioritize reuse to minimize waste and maximize value. Moreover, if a particular asset is deemed unusable, we responsibly recycle it through reliable and certified third-party partners Dell thoroughly vets. The service is designed for transparency to ensure a beneficial feedback loop for the circular economy.

At Dell Technologies, sustainability and ethical business practices drive the entire company, informing everything from how we make our products to how we design our services to ensure we positively impact clients and make a real difference in our world. We understand the importance of joining together to create a better and more sustainable tomorrow.

Source: dell.com

Thursday 15 December 2022

Enabling Greater Digital Personalization with Graph


Graph technology will be a critical component of future data management architectures that involve the integration of multiple datasets with relationships across the data elements. Graph databases and graph algorithms enable efficient storage, retrieval and processing of data that is interconnected in a complex manner. This technology will provide the ability to query and analyze data in ways that are not possible with traditional relational database systems and enable users to glean powerful insights from the data. Gartner predicts that by 2025, “graph technologies will be used in 80% of data and analytics innovations, up from 10% in 2021.”

Graphs and Their Applications


Graph technology uses a graph data structure for data storage and management. It is a common choice for dealing with connected data, such as social networks, financial transactions and other complex networks.

Graph databases are designed to be efficient and scalable and are used in applications that require fast and flexible data access. Graph technology has several applications in various industries. In healthcare, graph technology can help store and analyze complex patient data, such as medical records, to identify relationships between different treatments, illnesses and effectiveness of treatments across patients. Financial services organizations can use graph databases to identify potential fraud by analyzing transactions and customer data. Social media professionals can use graph technology to identify influencers and trends. And in cybersecurity, graph technology helps detect anomalies and threats.

Overall, graph technology is a powerful tool for managing and analyzing complex datasets with multi-dimensional relationships. Its scalability and flexibility make it a fitting choice for applications that require fast and flexible data analytics.

Digital Personalization Using Graph


Digital personalization is just one example that benefits from using Graph technology. Companies need innovative ways to target website visitors with relevant and personalized content. To deliver personalized content, companies need to perform data analysis to segment shoppers, display relevant advertisements and trigger personalized email drip campaigns for different shopper segments.

Historically, websites tracked visitors across multiple domains using third-party cookies. However, third-party cookies posed several downsides, including security and privacy concerns, loss of control, potential for retargeting, limited data accuracy and limited reach. Several browsers have already phased out third-party cookies, while the remaining browsers are in the process of doing so. This change will significantly impact the way companies deliver personalized content to their online shoppers.

This makes it imperative for companies to adopt modern approaches built on graph technology. Graphs capture relationships and connections between data entities, which are then used for data analysis and decision-making. Because data is so connected, graphs are becoming increasingly important, making it easier to explore those connections and draw new conclusions.

When companies utilize a Graph Analytics approach, they are less reliant on third-party cookies, while also addressing the growing need for building a 360-degree view of consumers and analyzing them across other applicable business units and datasets. There are several data sources to utilize to develop the graph for building personalization capabilities, including inventory data, consumer browsing history, transactions (e.g., CRM Data), leads (e.g., CRM Data) and proprietary data.
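As a minimal sketch of how such first-party data sources can drive personalization, the following builds a tiny bipartite shopper–product graph and derives recommendations by a two-hop traversal. The shoppers, products and scoring rule are entirely hypothetical, chosen only to illustrate the idea:

```python
from collections import defaultdict

# Hypothetical toy interactions: (shopper, product viewed or purchased).
# In practice these edges would come from browsing history, CRM
# transactions and other first-party sources named above.
interactions = [
    ("alice", "laptop"), ("alice", "monitor"),
    ("bob", "laptop"), ("bob", "docking-station"),
    ("carol", "monitor"), ("carol", "headset"),
]

# Represent the bipartite graph as two adjacency maps
shopper_to_products = defaultdict(set)
product_to_shoppers = defaultdict(set)
for shopper, product in interactions:
    shopper_to_products[shopper].add(product)
    product_to_shoppers[product].add(shopper)

def recommend(shopper):
    """Two-hop traversal: score products chosen by shoppers who share
    at least one product with this shopper."""
    seen = shopper_to_products[shopper]
    scores = defaultdict(int)
    for product in seen:
        for neighbor in product_to_shoppers[product] - {shopper}:
            for candidate in shopper_to_products[neighbor] - seen:
                scores[candidate] += 1
    return sorted(scores, key=scores.get, reverse=True)

# Suggests docking-station (via bob) and headset (via carol)
print(recommend("alice"))
```

The same traversal expressed against a relational schema would require a self-join chain across an interactions table; in a graph model it is simply a walk along existing edges.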

The following visualization outlines an approach to address current challenges while crafting a vision for graph analytics to drive all aspects of multi-channel marketing personalization.

[Figure: Graph analytics approach for multi-channel marketing personalization]

Why Choose a Graph Database


Considering the focus on performance and TCO, two key technical aspects to consider are data modelling simplification and query performance. Regarding data modelling simplification, traditional relational databases, with their rigid schemas and relationships, pose a challenge when building a data model for a graph problem. The data model is hard to scale when new vertices and edges are identified, both in terms of query performance and the difficulty of writing the desired query in SQL. Modelling the data as a graph can mirror the business relationships directly, a first indication that a graph database is a natural fit for this use case.

For query performance, graph databases offer strong out-of-the-box performance that saves the time otherwise spent on optimizations, and performance is expected to scale as additional edges and vertices are added.
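The schema-flexibility point can be made concrete with a sketch. In the toy property-graph store below (hypothetical structure, not any particular product’s storage format), vertices and edges are plain records, so introducing a new vertex or edge type later needs no migration, in contrast to an ALTER TABLE plus new join tables in a relational schema:

```python
# Minimal property-graph store: labels live on the data, not in a schema.
vertices = {}   # id -> {"label": ..., plus arbitrary properties}
edges = []      # (src_id, edge_label, dst_id)

def add_vertex(vid, label, **props):
    vertices[vid] = {"label": label, **props}

def add_edge(src, label, dst):
    edges.append((src, label, dst))

def out_neighbors(vid, edge_label):
    """One-hop traversal along edges of a given label."""
    return [dst for src, lbl, dst in edges if src == vid and lbl == edge_label]

add_vertex("c1", "Customer", name="Alice")
add_vertex("p1", "Product", name="Laptop")
add_edge("c1", "VIEWED", "p1")

# Later, a brand-new relationship type appears with no schema change:
add_vertex("s1", "Segment", name="Power Users")
add_edge("c1", "BELONGS_TO", "s1")

print(out_neighbors("c1", "BELONGS_TO"))  # ["s1"]
```

A real graph database adds indexing so traversals do not scan every edge, but the modelling property is the same: new vertex and edge types are just new data.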

Conclusion


Graph technology is rapidly gaining traction in the digital personalization space. By providing an intuitive and efficient way to represent, store and query data, Graph technology enables businesses to better understand and respond to customer needs. It enables organizations to build customer profiles and link them to other related information to provide more accurate and personalized experiences, as well as to create more efficient workflows and marketing strategies.

Graph technology has been especially useful for personalization in e-commerce, where companies can use it to capture customer behavior and preferences. By leveraging a graph database and Graph-based algorithms, companies can better understand customer behavior and create personalized experiences in real-time. Companies can use graph technology to provide personalized recommendations based on customer interests and past behavior. They can also use Graph technology to detect patterns within customer data and uncover insights they can then use to improve customer experiences.

By connecting customer data points and creating 360-degree customer profiles, businesses can identify key trends and develop more effective strategies for segmentation and targeting. Graph technologies and frameworks enable intuitive and efficient representation of data, and enable insights for businesses to deliver more personalized experiences.

Source: dell.com

Tuesday 13 December 2022

Creating Hybrid Multicloud Data Protection Order Out of Chaos


If one thing is clear from the 2022 Global Data Protection Index (GDPI) research, it is that organizations of all sizes are looking for more modern, simpler ways to protect and secure their data and enhance their cyber resiliency.

Multicloud sprawl and the lack of visibility into distributed public cloud data, a shortfall in IT skillsets and the growing threat of cyberattacks are driving demand for more flexible solutions to address an increasingly complex hybrid multicloud data protection and cybersecurity landscape.

GDPI research reports double-digit increases in public cloud IaaS and PaaS consumption, with 72% of IT planners stating they are unable to locate and protect distributed data resulting from DevOps and cloud development processes, while 76% report a lack of common data protection solutions for newer technologies like containers, cloud-native apps and edge technologies. Not surprisingly, 67% of respondents indicated they lacked confidence in their ability to protect all the data across their public cloud environment. These IT “blind spots” can result in critical data loss and key digital initiatives being delayed.

Another concerning issue is the lack of IT expertise and staff that organizations point to when citing their top reasons for interest in backup and cyber recovery “as-a-Service” solutions. Increased complexity combined with a lack of IT resources means overburdened IT teams may have little time to help the business innovate if they’re too busy fighting fires.

Finally, consider that nearly one in two organizations reported experiencing a cyberattack in the last 12 months that prevented access to data. While most cyberattacks were external security breaches (e.g., phishing attacks, compromised user credentials, etc.), insider attacks also significantly increased – up 44% from last year. As a result, 67% of organizations stated they are concerned their data protection measures may not be sufficient to cope with malware and ransomware threats.

To address the systemic risks involved with pervasive cyber threats and internal breaches, many organizations are embracing the architectural concepts of a Zero Trust security framework. The challenge is that while many are planning to implement Zero Trust design principles in their environments, few (12%) have fully deployed a Zero Trust architecture.

This raises the question of how most organizations plan to cope with the increased complexity and risk of protecting and securing distributed data in the interim. Many in the survey expressed strong interest in more automated solutions for helping them manage their critical data. For example, two of the top three “as-a-Service” solutions identified by the respondents were Cyber Recovery-as-a-Service (41%) and Backup-as-a-Service (40%). Some of the reasons for adopting these as-a-Service offerings include a lack of expertise (53%) and not enough staff to maintain these services (42%).

While public cloud and as-a-Service solutions will likely comprise an increasing percentage of organizations’ IT footprint going forward, over one-third of organizations still identify private cloud infrastructure as their preferred way to manage and deploy business applications. In this hybrid multicloud operating paradigm, organizations identified multi-workload data protection and intrinsic cybersecurity as key data management capabilities.

Another way organizations are looking to simplify data protection operations is by reducing the number of vendors they work with. Many (85%) believe they would benefit from vendor consolidation, and the research tends to support this sentiment. For example, those using a single data protection vendor had far fewer incidents of data loss than those using multiple vendors. Likewise, the cost of data loss incidents resulting from a cyberattack was approximately 34% higher for organizations working with multiple data protection vendors than for those using a single vendor.

Perhaps the key takeaway from the 2022 GDPI research is the pressing need for organizations to simplify and modernize how they manage, protect and secure their critical data assets and workloads wherever they reside. With IT skillsets in short supply, data continuing its inexorable growth and myriad cyber threats cascading across the digital landscape, IT planners need innovative solutions that deliver the simplicity, automation and flexibility to keep up with business demands as they change over time.

Dell cyber resilient multicloud data protection solutions give our customers the choice, flexibility and efficiency to protect any workload in any cloud with innovative technologies that are modern, simple and secure. It is our mission to help you stay ahead of the data protection curve so you have the confidence to spend less time on data protection infrastructure and more time developing innovative services to delight your customers.

Source: dell.com

Sunday 11 December 2022

Open RAN: Bringing It All Together


In our last blog, Operational Considerations for Open RAN, we looked at the operational capabilities telecommunications operators need to adopt in order to effectively deploy vRAN and Open RAN systems. But deployment, of course, is only the first part of the journey. Once 5G RAN systems are operational, they need to begin delivering and maintaining 5G services, which requires additional capabilities such as service management and orchestration (SMO) and intelligent control.

Service management and orchestration goes beyond infrastructure automation (covered in our previous blog) to automate how services are managed and delivered in a heterogeneous network. The standards for Open RAN, as set forth by the O-RAN Alliance and other industry groups, provide for much more automation than is found in a traditional RAN system. The SMO solution and RAN Intelligent Controllers (RICs) manage this extreme automation, which we’ll cover later in this blog.

The SMO provides a central interface for application configuration and provisioning. It also automates both infrastructure management processes and the creation of new services through southbound APIs (O2-IMS and O2-DMS). In addition, the SMO helps manage and orchestrate software, including FCAPS (Fault, Configuration, Accounting, Performance and Security), over the O1 interface. A third SMO interface, the A1 interface, is used to manage policies and artificial intelligence/machine learning (AI/ML) model updates via the near-real-time (RT) RIC.

[Figure: SMO architecture and interfaces. Source: O-RAN Alliance Software Community (SC) documentation]

One of the key advancements in the Open RAN system is the presence of new network elements known as RAN Intelligent Controllers. There are two types of RICs defined in Open RAN: the non-RT RIC, which controls applications that can tolerate one second or more of latency, and the near-RT RIC for applications that require sub-second responses, often in the range of milliseconds. RICs are responsible for collecting network data from the RAN, applying AI/ML algorithms and working together with the SMO to support closed-loop processes such as real-time video optimization and dynamic spectrum sharing.

While Open RAN systems are exceptionally flexible and scalable, they can also be complex to manage and integrate. In a vRAN/Open RAN architecture, the traditional integrated baseband unit (BBU) is divided into several elements: the radio unit (RU), the virtualized central unit (CU) and the virtualized distributed unit (DU). These elements need to be integrated and validated together by the telecommunications operator or their vendor(s). In the frequent case of a multi-vendor vRAN/Open RAN architecture, this means integrating and validating components for every possible configuration.

Infrastructure isn’t the only consideration for multi-vendor integration. Multiple vendors can deliver applications such as rApps and xApps, which provide RAN intelligence for policy enforcement and analysis and thus also require integration. The non-RT RIC manages rApps, and the near-RT RIC manages xApps. In addition to RIC integration and validation, these apps need to be interoperable with a wide variety of vendor solutions, including the underlying hardware for the various network elements (e.g., hardware accelerators and network interface cards) and the cloud platform software.

Creating Peaceful Co-existence with Legacy Systems


So far, we’ve talked primarily about the integration capabilities for multivendor 5G RAN systems. But what happens when 5G network elements need to interact with legacy 4G, 3G and even 2G systems? Most established telecommunications operators will have legacy RAN equipment in their networks for the foreseeable future. This can result in multiple management layers, which can create interoperability issues and exponentially increase management complexity. In order to bring multi-generational RAN systems into a unified management interface, legacy operational support systems (OSS) will need to be migrated to an SMO platform as part of wider Open RAN adoption.

While Open RAN integration is an important consideration for telecommunications operators, it shouldn’t be perceived as a barrier to innovation. Industry groups such as the O-RAN Alliance and technology leaders, including Dell Technologies, are working with telecommunications operators and vendors alike to create integrated, validated, open 5G RAN solutions. Only by opening up 5G to the broadest possible group of technology vendors can operators hope to reduce costs and accelerate innovation.

Source: dell.com

Saturday 10 December 2022

The Winning xOps Trifecta


Most believe DevOps is a team within an IT organization. In reality, however, it’s an operating model grounded in a culture or mindset inspired by Agile development. DevOps is defined as the marriage of development and IT operations to shorten development timelines, speed time to market, provide continuous feedback to software developers and improve quality. DevOps is a movement, still early in its lifecycle. More than 60% of organizations report being “mid-level” in their DevOps evolution according to Puppet’s 2021 State of DevOps report. Moving this needle requires investments in tooling and supporting IT, as well as buy-in of the cultural model and workflows.

DevOps began with the premise that a business operationally needs to become more agile, more technology-oriented and able to deliver technology with a solution or product focus – largely the delivery and management of infrastructure and applications. But the xOps soup has made this a bit messy.

The Emergence of xOps


DevOps organizations have the opportunity to accelerate the transition of IT from a cost center to a critical enabler of business strategies. After all, they’re able to take the vast amounts of data within an organization and make it easier for developers to write code and create killer apps or features. But DevOps has actually been so successful that it’s given rise to more “ops,” including FinOps, DataOps, AIOps, MLOps – or what we affectionately call the “xOps soup.”

Each of these operational models is meant to help streamline development tasks and efficiencies, but together they’re creating more complexity and workload responsibility for IT. IT Operations has had to embrace Agile development while continuing its day-to-day work of automating tasks, maintaining infrastructure and software layers, and supporting developer tools, increasingly across multiple cloud operating environments. According to IDC, 96% of CIOs say their role is expanding beyond traditional IT responsibilities.

Then again, the engineer in me sees a potential solution: creating a tighter interlock across three core Ops functions: DevOps, DataOps and FinOps – the xOps trifecta.

The xOps Trifecta


There are different views of xOps, but I believe these three are the core drivers. Why? DevOps handles infrastructure and application deployment, as well as management agility. DataOps automates the process of collecting, securing, storing and distributing data throughout the organization. And FinOps operationalizes procurement, financing and oversight to manage IT and cloud spending throughout the organization. That’s the xOps trifecta.

In a multicloud scenario, FinOps uses cost management as a criterion to choose where to deploy or migrate infrastructure and applications to best serve business needs. This gives the engineering teams a business-certified ROI mechanism to deliver against. DataOps helps to determine where data should live and who should have access to it.
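To make this interplay concrete, the placement decision described above can be sketched as a simple policy check: FinOps supplies the cost data and budget, DataOps supplies the residency rules, and the result tells engineering where a workload may run. Everything below is illustrative; the provider names, prices and rules are hypothetical, not real offerings.

```python
# Hypothetical sketch: a FinOps-style placement check that combines
# cost data with DataOps residency rules to pick a deployment target.
# All providers, prices and regions below are made up for illustration.

TARGETS = [
    {"name": "public-cloud-a", "monthly_cost": 1200, "region": "us-east"},
    {"name": "public-cloud-b", "monthly_cost": 950,  "region": "eu-west"},
    {"name": "private-colo",   "monthly_cost": 1100, "region": "us-east"},
]

def pick_target(targets, allowed_regions, budget):
    """Return the cheapest target that satisfies residency and budget rules."""
    candidates = [
        t for t in targets
        if t["region"] in allowed_regions and t["monthly_cost"] <= budget
    ]
    if not candidates:
        return None  # no target meets the business criteria; escalate
    return min(candidates, key=lambda t: t["monthly_cost"])

choice = pick_target(TARGETS, allowed_regions={"us-east"}, budget=1150)
print(choice["name"])  # private-colo
```

In practice these criteria would come from cost-management tooling and data-governance catalogs rather than hard-coded dictionaries, but the shape of the decision is the same: business-certified constraints in, a defensible deployment choice out.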

When combined and matured, DevOps, FinOps and DataOps will enable your business to function more efficiently and in an agile fashion, with immense payoff for AIOps. Further, the trifecta will support other Ops, like MLOps, to operationalize machine learning models and GitOps to drive continuous deployments. These functions will take advantage of the operating models instituted in the xOps trifecta.

But what about security? No company today can ignore the security of its assets, applications and tools, so security must be integrated into every aspect of a company's technical and operational functions. It cannot be confined to a DevSecOps view of the world, because DataOps and FinOps have security needs that are vital to their success: DataOps must secure data according to the governance mechanisms applied to it, and FinOps must protect the cost data behind its placement criteria. DevSecOps is only the beginning of the security journey in this xOps world.

Those on the path to becoming a mature DevOps organization should not ignore the need to operationalize their data and use business criteria to effectively manage costs. See the whole picture. Use your DevOps journey as a springboard to understand and unify the benefits of the trifecta built on a core of security. The end game is a culture where the convergence of DevOps, DataOps and FinOps enables IT Operations, using AI to automate core IT functions and enhance the capabilities of IT professionals. This will ensure your executives' expectations meet operational reality.

Source: dell.com

Thursday 8 December 2022

Multicloud in Healthcare: A Prescription for Operational Excellence


An endless variety of terms exist for what’s happening in today’s Healthcare IT landscape. Between private cloud, hybrid cloud, polycloud, public cloud, omnicloud and others, it’s little wonder many healthcare organizations struggle to determine the best locations to run their clinical workloads. A multicloud strategy can provide a clear way forward by making it possible to run each mission-critical workload on the optimal cloud platform today while providing the flexibility to change locations as requirements evolve.

Customer Conversations Then


Ten years ago, conversations with healthcare IT decision-makers often revolved around which public cloud provider they had selected. They were ready to go "all in" on the cloud and "get out of the datacenter business."

Their motivation was understandable. The value of clinical space in healthcare, the presumed financial burden of maintaining their own physical IT footprint and the desire to focus IT staff on other tasks were all good reasons to consider "the cloud."

With the benefit of hindsight, however, it is clear few fully completed that journey. Why? For some, the transition proved harder and more costly than anticipated. Unexpected fees, lengthy migrations, application dependencies, governance requirements, legal constraints and economic reality have all presented obstacles to success.


Customer Conversations Now


Most would agree a lot has changed since those conversations a decade ago.  If the cloud is an “operating model,” almost all healthcare organizations are “multicloud” at this point, even if they differ in their mixes of private, hybrid, public and other cloud platforms.

Over the years, new cloud players have emerged, each attempting to differentiate their offerings to capture mindshare and business within the healthcare industry. Some endure; others have gone away. How many organizations would again choose the providers they selected ten years ago? Needs and expectations have changed dramatically. Increasing co-opetition, ethical and privacy concerns, and evolving business strategies are just a few pressures that require healthcare organizations to adapt to change.

As a result, today’s more cloud-savvy healthcare technology customers talk about requirements such as:

“We want to use the best services from each provider and cannot have vendor lock-in.”

“We need our various applications to each run in the best place for them from a technology, compliance and financial standpoint.”

“‘OpEx’ (or ‘CapEx’) works best for our organization.” (Yes, many still want CapEx.)

“We want to retain full sovereignty of our data.”

“We want our data available to (but not necessarily in) all major public clouds so we can benefit from a wide range of advanced services without paying to move data in and out.”

“We want predictable expenses.”

And, still, “We don’t want to be in the datacenter business.”

One Size Does Not Fit All


There is no single multicloud strategy that applies to all healthcare organizations. Each has different business constraints, legal requirements and compliance landscapes. Every organization has unique levels of IT talent, application mix and dependencies, and data gravity centers.

In the past, organizations fell victim to “multicloud by default.” In other words, the cloud more or less just happened. Over time, we have learned that having control over an organization’s destiny is most important to long-term multicloud success.

The first step to multicloud success is taking what is frequently called a “multicloud by design” approach. Multicloud by design is a conscious, strategic process through which an organization thoughtfully plans how it will use the cloud. Organizations taking this approach control the evolution of their cloud environment using a variety of operational, financial and technological “knobs and dials” to optimize their cloud mix over time.

Getting the toolset right for this approach can mean freedom from years of turmoil and expense associated with lock-in. Dell Technologies and its partner ecosystem offer the premier collection of “knobs and dials” for the healthcare industry, with capabilities that include:

◉ Support for CapEx or OpEx strategies.
◉ The ability to leverage solutions-as-a-service from Dell Technologies that can run in public clouds.
◉ Public cloud technologies that can run as on-premises or cloud-adjacent infrastructure.
◉ Healthcare-validated ISV platforms.
◉ Managed healthcare workloads deployed near, and accessible to, public cloud providers.
◉ Operational and consulting services for planning and realizing multicloud plans.

Source: dell.com