Sunday 29 May 2022

Equipping Our Developers Inside Dell With Application Intelligence


This is a continuation of a series of blogs about how Dell IT is Cracking the Code for a World-class Developer Experience.

As Dell Digital, Dell’s IT organization, has continued our DevOps journey and embraced the Product Model approach, the role of our application developers has shifted significantly. Developers no longer hand off applications they create to someone else to operate, monitor and manage. They now own their applications top to bottom, throughout their lifecycle. To support this, Dell Digital provides developers with the data, tools and methodologies they need to help them better understand their applications.

Over the past two years, our Observability product team has been evolving our traditional application monitoring process into what we now call Application Intelligence, a much more extensive set of tools and processes to gather data and provide it to developers to analyze, track and manage the applications they build. It helps them gain insights about application performance, behavior, security and most importantly, user experience. It is data that developers had limited access to before, since they had no reason to monitor the operations of their applications in the old operating model.

As developers began to use more data to evolve their DevOps operating model, a key challenge we had to overcome was the high level of industry fragmentation. Multiple and overlapping tools were forcing developers to change context frequently, ultimately impacting productivity. To reduce fragmentation, we have created a simplified and streamlined experience for developers to easily access application intelligence via automated processes in our DevOps pipeline, as well as self-service capabilities for them to instrument their own applications.

By alleviating the friction previously caused by using multiple tools, different navigation flows and different data and organizational structures, we are enabling developers to spend less time trying to find where performance issues are and more time on adding value.

Making a Cultural Change

Transforming our observability portfolio to meet developers’ needs required both a cultural and technical strategy.

We started with the cultural change, educating developers on where they could find the data they need to accommodate the fact they now own their apps end-to-end. Under the product model, where our IT software and services are organized as products defined by the business problems they solve, developers are part of product teams responsible for their solutions and services from the time they are built throughout their entire lifecycle. Giving them the data to better understand their products helps developers deliver higher-quality code in less time to meet the rapid delivery that the product model demands.

That was the first step to opening doors for developers with access to data that was highly restricted before. My team always had extensive observability data but fiercely controlled it because that was the operational process.

But now, we gave developers the freedom to see what was going on for themselves by ensuring the data was available for everyone in a shift to data socialization.

We spent a lot of time talking to developers about what they were looking for, what was missing and what technologies worked for them. We asked them things like, “when you wake up at three a.m. because someone called you with a priority one issue and it’s raining fire on your app, how do we make your life easier during that moment?”

Self-service was the first thing they asked for. They don’t want to have to call us anymore to get the information they need. So, in a crucial first step, we gave developers self-service use of the tools that they need to get their job done without asking us for permission.

We also changed the focus of how we look at application data to meet developers’ needs by making the application the center of our observations. In our traditional monitoring process, our focus was a bottom-up view of the environment, which assumed that if the infrastructure—the servers, CPUs and hard drives—was working, the app was working. However, that is not always the case.

Two years ago, we began a top-down data view, looking first at whether the app is working and drilling down to infrastructure functionality from there. The application is now the most important thing in our observability process.

Streamlining and Simplifying Tools

In the second year of our transformation, we focused on creating a centralized app intelligence platform, consolidating the number of tools we use and radically simplifying the application data process. We chose to utilize three major vendor tools that complement each other to stitch together the data collections and let users see what’s happening across the stack, from traditional to cloud.

One key tool we use offers out-of-the-box, real-time observability of all applications across our ecosystem in detail, including which apps are talking to each other. It automatically gets the data out of the applications for users. Developers just need to put a few lines of code in the DevOps pipeline for cloud native applications and it automatically maps everything out. Because it spans production and non-production environments, developers can even understand how well their applications are working while they’re coding them.
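The post doesn’t show the actual snippet or name the tool, but as a rough sketch of what those “few lines of code” of instrumentation can look like, here is a minimal example assuming an OpenTelemetry-style SDK; the service name, span attributes and console exporter are placeholders rather than Dell Digital’s real configuration.

```python
# Minimal sketch of self-service application instrumentation (OpenTelemetry-style).
# The service name and exporter below are placeholders, not Dell Digital's real setup.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Identify the application so traces can be grouped per product/service.
provider = TracerProvider(resource=Resource.create({"service.name": "checkout-api"}))
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

def handle_request(order_id: str) -> str:
    # Each request becomes a span, so developers can see latency and errors per transaction.
    with tracer.start_as_current_span("handle_request") as span:
        span.set_attribute("order.id", order_id)
        return f"processed {order_id}"

if __name__ == "__main__":
    handle_request("42")
```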

By enabling access in the DevOps pipeline, developers get continuous feedback on every aspect of their application, including performance, how their customers are using their application, how fast the pages are loading, how their servers are working, which applications are talking to their applications and more.

With this tool, they can actually go top to bottom, on every problem that is happening in the environment, for any application.

This detailed platform works alongside a second tool that provides a longer-term, executive bird’s-eye view of the environment. This tool gives developers and our Site Reliability Engineering (SRE) teams the ability to customize their views to exactly match how the applications are architected. We also deliver real-time data to our business units, enabling them to leverage the information that we regularly collect.

If a dashboard sends an alert about a problem, developers can use the more detailed tool to see exactly what the issue is. The solutions are used in combination to provide visibility and quick access to information from an ecosystem view all the way down to the transaction level.

The third tool we offer is an open-source-friendly stack that provides a virtualization layer that reads data from anywhere and provides users with sophisticated dashboard capabilities. Developers access this tool from Dell’s internal cloud portal rather than the DevOps pipeline, since it requires users to design how data is presented.

As we transitioned to our streamlined portfolio in the summer of 2021, we actually re-instrumented all the critical Dell systems in 13 weeks, with no production impact. The transition, which would normally have taken a year and a half, was completed just in time for Black Friday.

In addition to the efficiencies gained through automation and self-service access to these platforms, developers are also able to instrument their applications by themselves, giving them the freedom to define and implement what works best for their application.

Overall, we have given developers a toolbox they can use to get the best results in their specific roles in the DevOps process without having to get permission or wait for IT. We are also continuing to improve our offerings based on ongoing feedback from developers. Or like we always say: Build with developers for developers.

Keep up with our Dell Digital strategies and more at Dell Technologies: Our Digital Transformation.

Source: dell.com

Saturday 28 May 2022

Designing a Winning Containers as a Service Portfolio


For years, telecom operators, cloud service providers and system integrators have delivered infrastructure-as-a-service (IaaS) to their corporate customers. As information technology (IT) evolves and becomes more application-centric, many customers are demanding new services from these providers, in particular containers-as-a-service (CaaS) and platform-as-a-service (PaaS).

So far, system integrators have primarily provided technical assistance to their customers, followed by fully managed services for on-premises Kubernetes installations that customers initially used for pilots, which later became the production platform for their corporate digitalization initiatives.

As part of this transformation, some corporations prefer a hosted CaaS/PaaS service rather than maintaining the new platform in their data center. To address this situation, the service provider of choice has two options:

◉ Resell a managed service from a hyperscaler at low margin.

◉ Build its own hosted service to expand its wallet share.


Considering that 96% of the respondents in the CNCF Survey 2021 said they were using or evaluating Kubernetes, the obvious choice for service providers is to build a Kubernetes-based CaaS/PaaS. Although Kubernetes and related cloud-native technologies are all open-source projects, there are reasons why service providers should think twice before building a service using only open-source projects:

◉ Support. Relying on community support for escalations is not a good idea, especially if they would like to provide SLAs to their final customers.

◉ Security. Most corporations demand complex security policies as well as authentication and authorization processes that are time-consuming to set up and maintain using the open-source Kubernetes distribution.

◉ Integration. Kubernetes itself is only a foundation of an infrastructure and development platform and needs multiple additional components, including networking, storage, load balancers, monitoring, logging, CI/CD pipelines, etc. Integrating and managing these components is complex and time-consuming.

Multiple options in the market address these issues while maintaining compatibility with the open-source Kubernetes project. VMware Tanzu and SUSE Rancher are some options, and Cloud Management Platforms (CMP) such as Morpheus also have a Kubernetes distribution that is compelling for service providers.

However, Red Hat OpenShift is the leading multi-cloud container platform according to The Forrester Wave™: Multi-cloud Container Development Platforms, Q3 2020, and has a 47% market share. Red Hat has achieved its leading position partly due to its proven track record in supporting open-source projects, but more importantly, OpenShift is a fully integrated development platform. In addition to integration with networking, storage, monitoring, and logging, it includes an OpenShift API, an administration and developer console, and multiple layers of security. OpenShift is an interesting technology option for service providers who want to offer more than Kubernetes and provide a complete development platform as a service to their customers.

OpenShift’s ability to accelerate application deployment and improve the efficiency of IT infrastructure teams is key to achieving payback on deployments in less than 10 months.

But for a service provider, selecting the core technology is just the first step of the journey. They will also need to decide on deployment options, security, monitoring, backup, availability capabilities, and integration with billing and other back-office processes. Dell Technologies can help with these decisions.

One of the most critical decisions is how a service provider runs OpenShift for their customers. The most common architecture is to run a single OpenShift cluster per customer. They can do this for their customers in multiple ways, including bare-metal and virtualized.

Deploying OpenShift on bare metal has licensing benefits for larger installations, but it requires multiple physical hosts per installation and may be too high an entry point in most situations. Running OpenShift virtualized (such as on VMware vSphere) enables small deployments. It also offers the standard benefits of virtualization, including extra control-plane redundancy through vSphere HA and abstraction of OpenShift from the underlying hardware, including firmware and drivers.

VxRail provides a stable and operationally efficient platform for service providers to start offering OpenShift as a service. Multiple independent OpenShift deployments can share the same underlying hardware, and customers can start small and grow their deployment as they develop more applications. The service provider can rely on VxRail automation to handle all platform and hardware upgrades without worrying about how these will impact OpenShift.

Service providers can build additional services by integrating Dell Technologies products with OpenShift. An option is to use PowerProtect Data Manager for backup services or offer persistent storage from Dell Technologies storage arrays using container storage interfaces (CSI) and provide additional storage features using Container Storage Modules.
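The post doesn’t include a configuration example, but as a hedged sketch of what consuming CSI-provisioned array storage looks like from the application side, the following uses the official Kubernetes Python client; the storage class name “dell-csi-sc” and the capacity request are placeholders that depend on the CSI driver and Container Storage Modules configuration in a given cluster.

```python
# Hedged sketch: requesting persistent storage from a CSI-provisioned storage class.
# "dell-csi-sc" is a placeholder storage class name; the real name depends on the
# CSI driver and Container Storage Modules configuration in a given cluster.
from kubernetes import client, config

def request_volume(namespace: str = "default") -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    pvc = {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": "app-data"},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": "dell-csi-sc",  # placeholder
            "resources": {"requests": {"storage": "50Gi"}},
        },
    }
    client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace, pvc)

if __name__ == "__main__":
    request_volume()
```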

Dell Technologies has a tradition of more than 20 years of collaborating with Red Hat to meet customer demand with joint solutions. Now, for service providers that already deliver IaaS based on the VMware vSphere hypervisor, we have co-created a solution that addresses the topics above, accelerates go-to-market and minimizes the investment.

With Dell Kubernetes for Cloud Service Providers with Red Hat OpenShift, built on the low-maintenance, high-performance foundation that VxRail™ automation provides, top service providers can now modernize their customers’ applications, adapting OpenShift Container Platform to each customer’s size and requirements.

Source: dell.com

Thursday 26 May 2022

Simplifying Power Grid Management for Electric Utilities


In response to strong consumer demand, many electric utilities are modernizing their substation infrastructure to support the increasing demand for electricity and the growth of distributed energy resources (DER). They are also aiming to introduce more advanced automation and stronger security at the substation level.

As renewable energy sources come online, power grids need to be more intelligent to support more complex distribution scenarios and ensure a real-time balance of supply and demand. Why? Because the traditional one-way, consistent flow of power from the substation to consumers is evolving. One reason for this is that solar and wind generation feed power into the electrical grid intermittently, not in a steady flow. In addition, more customers are installing solar panels on their homes, potentially generating more power than they consume. When this happens, a residential subdivision can become a virtual power plant, and the local substations experience a reverse flow as they send that excess power back into the grid.

For utilities, diverse power sources and nontraditional flows make balancing supply and demand more difficult. That has prompted many of them to explore advanced technology that can help improve their forecasting and operational control while also supporting automation and security.

Turnkey Server Solution Enables Substation Modernization at Scale

Given these changing conditions, utilities need to update their advanced distribution management systems (ADMSs) to support DERs and other technologies to ensure a high level of reliability and resiliency. To better manage the larger and more diverse data streams handled by the grid control systems, many utilities are bringing more processing capability, efficient automation, and enhanced cybersecurity into their substations. Accomplishing this modernization across all substations within a utility’s area of operations can be a complex, costly, and protracted undertaking without standardized, validated infrastructure hardware that can withstand challenging workloads.

Dell Technologies is committed to the success of the utility industry and collaborates closely with innovative, large electric utilities. We have optimized the Dell PowerEdge XR12 server as a turnkey solution for substation deployment. The PowerEdge XR12 complies with IEC 61850-3, the international standard for hardware devices in electrical substations.

Intrinsic Security to Safeguard Utility Data

Electric utilities are frequent targets of cyberattacks and must address any risks associated with large-scale technology modernization across substations. For most utilities, the success of substation modernization will depend on strengthening their cybersecurity and addressing vulnerabilities at the edge and elsewhere in their environment. The XR12, designed with intrinsic security to give utilities an advantage in safeguarding their data and substation systems, can be an important element in increasing security. The advanced security capabilities of the Intel® Xeon® Scalable processor, including Intel Crypto Acceleration, complement and extend our security engineering.

Dell Technologies helps deliver intrinsic security in every phase of the server lifecycle, from design and manufacturing through use and end-of-life. Utilities can deploy PowerEdge servers in highly secure and other environments. Our cyber-resilient architecture encompasses every aspect of server design and operation, starting with the firmware and extending to the operating system, stored data, hardware components, chassis, peripherals and management capabilities. Secured Component Verification ensures that PowerEdge servers are delivered and ready for deployment exactly as they were built by Dell manufacturing.

Noteworthy security capabilities of the PowerEdge servers include encryption at the motherboard level, proactive auto-detection of approaching part failures, a secure parts supply chain, and advanced vulnerability analysis services. The Integrated Dell Remote Access Controller 9 (iDRAC9) offers an arsenal of configurable server security features that help utilities reduce the number of in-person support visits. Dell OpenManage Integration for VMware vCenter (OMIVV) streamlines data center processes by enabling the management of physical and virtual server infrastructures with VMware vCenter Server.
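As an illustration of the kind of remote check that can replace an in-person visit, here is a hedged sketch that queries server health over the standard Redfish REST API exposed by iDRAC9; the management address, credentials and certificate handling are placeholders, not a recommended production setup.

```python
# Hedged sketch: checking server health remotely over the standard Redfish API that
# iDRAC9 exposes, so substation visits can often be avoided. The address and
# credentials are placeholders; System.Embedded.1 is the usual iDRAC system ID.
import requests

IDRAC = "https://192.0.2.10"           # placeholder management address
AUTH = ("monitor-user", "password")    # placeholder credentials

def system_health() -> str:
    resp = requests.get(
        f"{IDRAC}/redfish/v1/Systems/System.Embedded.1",
        auth=AUTH,
        verify=False,  # many lab iDRACs use self-signed certificates; not for production
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["Status"]["Health"]  # e.g. "OK", "Warning", "Critical"

if __name__ == "__main__":
    print("Server health:", system_health())
```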

We evolve and innovate PowerEdge security capabilities to help utility customers stay ahead of ever more sophisticated threats and risks to their systems and applications. Dell Technologies maintains stringent, auditable, zero-trust security across our global supply chain.

Optimized for Efficient Operations

Utilities can be confident that the XR12 — a ruggedized, single-socket 2U server with reverse mounting capability and a depth of only 16 inches — not only fits easily into existing substation racks but also deploys quickly and helps make better use of limited space. The XR12 offers up to two accelerator options to maximize performance and efficiency. Utilities can run multiple vendors’ software virtually on the same server, eliminating the need for a dedicated server to run each specific software application. Utilities accustomed to using dedicated servers for individual software applications will likely find that using a single high-performing device to run multiple workloads will simplify IT administration, reduce operational expenses and minimize their exposure to cybersecurity risks.

Virtualization — decoupling data and software workloads from physical hardware — is key to running several grid-monitoring software tools on a common hardware platform. It also keeps substation modernization and automation with the XR12 efficient yet scalable. Our server design incorporates virtualization technology from VMware to achieve optimal resilience and manageability of the software systems that manage substations.

Preparing for the Future of Renewable Power Distribution

Judging by the modernization initiatives underway in utilities, we expect that substations will evolve into small edge data centers that need to accommodate future artificial intelligence and machine learning (AI/ML) capabilities and significant growth in data volumes. We have already published design guidance for these more dynamic, powerful substations. They will play a critical role in delivering the intelligence that utilities will rely on as they manage grids with power from sustainable and conventional sources to reliably meet demand.

Dell Technologies OEM Solutions is committed to developing products and solutions that advance the achievement of net-zero climate targets. The Dell PowerEdge XR12 server has been validated to support substation digitization efforts that are critical for the ongoing energy transition. As we partner with many leading software vendors, we strive to ensure that the PowerEdge XR12 will operate each of their validated designs to provide their end customers with a wide range of options to solve their specific challenges.

Source: dell.com

Wednesday 25 May 2022

Predicting and Preventing Unhappy Customers Using AI


Behind the scenes, teams at Dell have spent the last two years building a powerful ecosystem of tools for managing consumer (CSG) tech support cases. In this ecosystem, agent workflows are supported by data science products, including a machine learning model that predicts the cases most likely to result in dissatisfied customers (DSATs). This “DSAT Predictor” has helped make managing the customer experience a more proactive process by increasing visibility of open cases that could benefit from additional intervention by an agent, often before an acute problem has occurred.


The “DSAT Predictor” works by using data about the case, agent, product and customer to flag open tech support cases which are most likely to result in dissatisfied customer responses on a customer experience survey. Cases identified by the machine learning model are displayed to agents as a flag in a dashboard for case tracking and management. This allows tech support agents or managers to intervene earlier on potential problems, thereby improving customers’ tech support experiences.
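Dell hasn’t published the model’s actual features or algorithm, so the following is only a minimal sketch of the general pattern: train a classifier on historical cases, score open cases, and flag those whose predicted probability of dissatisfaction crosses a threshold. The features, synthetic data and threshold are invented for illustration.

```python
# Minimal sketch of the general approach: train a classifier on historical cases and
# flag open cases whose predicted probability of dissatisfaction exceeds a threshold.
# Feature names, threshold, and model choice are illustrative, not Dell's actual design.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.integers(0, 30, n),        # case age in days
    rng.integers(1, 10, n),        # number of customer contacts
    rng.integers(0, 2, n),         # repeat dispatch (0/1)
])
y = (X[:, 0] + 3 * X[:, 2] + rng.normal(0, 3, n) > 20).astype(int)  # synthetic DSAT label

X_train, X_open, y_train, _ = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Score "open" cases and surface the risky ones to agents as a dashboard flag.
risk = model.predict_proba(X_open)[:, 1]
flagged = np.where(risk > 0.7)[0]
print(f"{len(flagged)} of {len(X_open)} open cases flagged for early intervention")
```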

The project’s benefits go beyond early intervention. It has also led to process standardization without sacrificing region-specific nuances around customer preferences or expectations. The first attempts at DSAT prediction emerged organically from individual business segments. Over time, there grew to be six different predictive tools, each with its own inputs, algorithm, metrics of evaluation, architecture and deployment environment. In those early days, alignment and standardization across the business were less than ideal. As data science capabilities matured at Dell, there was an opportunity to increase alignment and standardization through a single machine learning approach.

The unification of the DSAT Predictor into a single, stable model enabled a mirrored unification of case management processes. When the entire organization relies on the same metrics and methodologies, leadership can more easily steer the direction of the organization. Standardization can be overemphasized, however, costing the organization the flexibility to respond to regional differences. The case management organization avoids this by tailoring the DSAT Predictor to each region, allowing the standardized predictive model and case management process to mold to regional cultural norms and business expectations.

Standardized processes and data science together allow the integration of new insights and automation to continue to improve agent workflow and customer experience. Tim Lee, a process transformation consultant, noticed that the ecosystem of machine learning models has “made the business more agile” by allowing processes to be implemented organization-wide while maintaining regional nuance through tailoring the models to reflect regional, site, or country-level differences.

Although standardization has been core to the DSAT Predictor, it works not by reducing agent autonomy but by providing an additional facet of information on which to base case management decisions. The services tech support agents provide our customers continue to be core to the business, so the case management ecosystem began with focusing on a human-in-the-loop “augmented intelligence” framework. This ensures that both agent expertise and machine learning are layered into the system to provide a warm-touch but data-driven experience.

Patrick Shaffer, Process Transformation Consultant and a key project stakeholder, describes the DSAT Predictor as “reducing guesswork and the experiential-based decisions and providing a probability of a DSAT based on years’ worth of historic data and understanding of the business.” That reduction in guesswork allows agents to focus on resolving cases more quickly and providing timely intervention when necessary.

Data science and machine learning are often championed as tools for automation but aiming to delight customers means offering both assisted and unassisted options to diagnose and solve tech support problems. Rather than reducing headcount, the objective is to provide agents the tools to facilitate their work. Optimized troubleshooting flows, next-best-action recommendations, and data-driven insights all play a role. Any additional time agents gain through greater efficiency can be spent on cases where they can have a positive impact on the outcome, such as cases where we know the customer is dissatisfied. Lee pointed out that agents are working diligently to manage multiple cases, often with exceptional attention to detail, but the “DSAT Predictor gives them a chance to bring their heads up and points out when historical trends indicate we may have missed something.”

Since the initial unification of the DSAT Predictor, iterative improvements have continued to fine-tune and improve the product. Model inputs have been aligned and standardized across the business, resulting in an 85% consolidation of inputs compared to the six precursor models. The deployment architecture and algorithm evolved to afford over 20% better model performance, with some business segments experiencing even greater improvements. And when cases are appropriately intervened upon, there is an estimated 15% reduction in dissatisfied customers. Today, the DSAT Predictor runs on more than 1.5 million service requests per year and covers all regions and business segments in the Consumer part of Tech Support Operations.

The DSAT Predictor has measurably improved case management, but Patrick Shaffer sees the current system as just the beginning of data science in case management. He points out that the DSAT Predictor “was highly focused on one facet of decision-making: making the customer experience better,” but there are numerous other challenges that are suitable for a data science solution. He aims to focus on “making agents’ lives easier” and “any time we have a decision point that needs to be made in a workflow, that’s where data science could fit in.”

The data-driven appetite for innovation is echoed across Services with a growing portfolio of AI/ML products to support customer experience.

Source: dell.com

Tuesday 24 May 2022

Rise of Data-Centric Computing with Computational Storage


Computational Storage is the next evolution in data-centric computing, improving data and compute locality and economies of scale. The focus is to federate data processing by bringing application-specific processing closer to data instead of moving data to the application. It benefits the overall data center environment by freeing up host CPU and memory for running customer applications, reducing network and I/O traffic by moving processing to data, and improving security by minimizing data movement, which in turn lowers the overall carbon footprint from a sustainability perspective.

The evolution towards data-centric computing started with “Compute Acceleration” in the last decade, which focused on Application Acceleration and AI/ML, leading to an industry focus on GPUs, ASICs, AI/ML frameworks and AI-enabled applications. GPUs and ASICs are now broadly deployed in AI/ML use cases. The second phase of data-centric computing focused on “Network and Storage Acceleration”. It started with FPGAs and SmartNICs and has evolved over the last two years, with a broader industry focus on DPUs (Data Processing Units) and IPUs (Infrastructure Processing Units).

These enable disaggregation of data center hardware infrastructure and software services to enable logically composable systems and optimized dataflows. DPUs/IPUs are broadly adopted in cloud and gaining momentum in enterprise deployments. “Data Acceleration” is the next step in the evolution towards data-centric computing and Computational Storage is the key underlying technology to enable this Data Acceleration.


The focus of Computational Storage technologies is to move computation closer to data by best leveraging silicon diversity and distributed computing. It will enable an evolution from the “data storage” systems of today to the “data-aware” systems of the future for more efficient data discovery, data processing, data transformation and analytics. In the next few years, we will see it reach a similar level of maturity and industry momentum as we see with GPUs/ASICs for AI/ML and with DPUs/IPUs for network and storage processing.

Computational Storage Drives (CSDs), Computational Storage Processors (CSPs) and Data Processing Units (DPUs) are the underlying technology enablers that move data processing into hardware and improve the overall economics of data center deployments. FPGAs (Field-Programmable Gate Arrays) will also play a role by providing a software-programmable element for application-specific processing and future innovation. These are being integrated into CSDs and CSPs for high-performance application-specific processing.

There has been industry activity across startups, system vendors, solution vendors and cloud service providers over the last two years around computational storage solutions. The challenge is the integration of computational storage interfaces with applications and the broad availability of hardware acceleration capabilities in storage devices and platforms.

Multiple standards efforts are underway in NVM Express and SNIA to standardize the architecture model and command set for block storage. SNIA architecture for computational storage covers CSD, CSP and CSA (Computational Storage Array), where a CSA typically includes one or more CSDs, CSPs and the software to discover, manage and utilize the underlying CSDs / CSPs. The integrated solutions are an example of CSA. The standardization and open-source efforts will further evolve to object and file protocols since most applications access and store data using files and objects.

Since computation can only be moved to a point where there is an application-level context of data or where that context can be created, you will also see computational interfaces emerge for file and object storage systems. There are opportunities to extend the file and object access methods to federate application-specific processing closer to data and only send the results to the application. Integration with emerging software-defined databases and data-lake architectures will make it transparent for user applications that run on top of the data-lake and improve performance and economics of the solution.
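To make the pushdown idea concrete, here is a purely conceptual sketch contrasting the traditional “move data to the application” pattern with a computational-storage style offload; the csd client and its offload_filter() call are hypothetical, not a real SNIA, NVMe or Dell interface.

```python
# Conceptual sketch only: contrasting "move data to the application" with
# "push the computation to the data." The `csd` client and its offload_filter()
# method are hypothetical, not a real SNIA/NVMe or Dell API.

def average_temperature_pull(storage, dataset: str) -> float:
    """Classic approach: ship every record over the network, then filter and reduce."""
    rows = storage.read_all(dataset)                      # full dataset crosses the wire
    temps = [r["temp_c"] for r in rows if r["sensor"] == "substation-7"]
    return sum(temps) / len(temps)

def average_temperature_pushdown(csd, dataset: str) -> float:
    """Computational-storage approach: the drive or array runs the filter and the
    aggregate next to the data and returns only the result."""
    return csd.offload_filter(                            # hypothetical offload call
        dataset,
        predicate={"sensor": "substation-7"},
        reduce="avg(temp_c)",
    )
```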

The increased adoption of Edge deployments creates further opportunities to federate application-specific processing to Edge locations where data is generated. Computational Memory is also emerging as an adjacent technology to move computation to data in memory. This will enable computation in future persistent memory fabrics. HBM (High Bandwidth Memory) will be interesting not only for GPUs, but also for the data transformation engines integrated in storage platforms.

Data operations will be both fixed-function and programmable. Modern storage systems are built using a software-defined architecture based on containerized micro-services. This creates an opportunity to run application-specific computation in the form of secured micro-services on horizontally scalable storage system nodes, or all the way down on the computational storage drive or computational persistent memory. We will see future databases and data lake architectures take advantage of computational storage concepts for more efficient data processing, data discovery and classification.

Dell Technologies is working with industry standards groups and partners to further evolve computational storage technologies and deliver integrated solutions for customers. Architectures for federated data processing will continue to evolve in 2022 and pave the way for the next evolution in data-centric computing.

We will see the “data storage” systems of today evolve into the “data-aware” systems of the future. These systems will be able to auto-discover, classify and transform data based on policies and enable organizations to move from digital-first to data-first. Application-specific data processing will federate closer to data and optimize the overall economics of data center and edge-to-cloud architectures. Stay tuned for more on this later in 2022 and 2023.

Source: dell.com

Sunday 22 May 2022

AI – Enabling 5G Superpowers?


If there is one thing to recognize about 5G, it is that, above all the hype, 5G has ignited enterprises’ edge transformation plans. It shook the market with its speed records and ultra-low latency figures. But it was also able to convince critical business operations of its security and reliability attributes. Now, every company we talk to is working on incorporating process automation and using data for fast and consistent decision-making.

Jack of All Trades – Master of Them All

5G is a big step forward in all dimensions (speed, latency, number of users, security, available spectrum, flexibility), and has the capability to become the target architecture for the years to come. Expecting consolidation gains, many CSPs will converge legacy networks and repurpose spectrum assets in favor of 5G.

5G’s success results from wider spectrum availability and the intelligent way 3GPP planned its releases and designed its flexible logical Tx/Rx frames, allowing capacity, throughput and latency to be scaled independently. This flexibility allows 5G to adapt and deliver different connectivity services, always cost-effectively.
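One concrete mechanism behind that flexibility is 5G NR’s scalable numerology: each step in the numerology index doubles the subcarrier spacing and halves the slot duration, letting operators trade latency against coverage and capacity. A small illustration:

```python
# Worked illustration of 5G NR's scalable numerology: subcarrier spacing doubles with
# each numerology index mu, and slot duration halves, which is one mechanism behind
# the flexible trade-off between throughput and latency described above.
for mu in range(5):                       # 3GPP numerologies mu = 0..4
    scs_khz = 15 * 2 ** mu                # subcarrier spacing in kHz
    slot_ms = 1.0 / 2 ** mu               # slot duration in milliseconds
    print(f"mu={mu}: subcarrier spacing {scs_khz:>3} kHz, slot {slot_ms:.4f} ms")
```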

5G also benefits from more advanced modulation techniques, modern electronics and its embrace of “openness” – 3GPP created 5G to be open and to leverage contemporary innovation streams such as edge computing, open IP protocols, CUPS and open APIs. Proof of this is 5G’s expected longevity and its sequential releases, in which 3GPP keeps adding new functionality to serve different use cases.

AI for 5G and 5G for AI

Artificial intelligence is intrinsically embedded in many of 5G’s signal processing tasks, traffic prediction algorithms and self-optimization routines. With the use of AI, 5G networks can predict traffic patterns and electronically focus their antenna arrays accordingly, ensuring that network resources are always used effectively. They can also intelligently power off parts of the network equipment to save energy.
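As a toy illustration of the energy-saving idea (real RAN features use far richer models), a network function might forecast near-term cell load from recent samples and put capacity carriers to sleep when the forecast stays below a threshold:

```python
# Toy sketch of the energy-saving idea: forecast near-term cell load from recent
# samples and switch capacity carriers to sleep when the forecast stays low.
# Real networks use far more sophisticated models; the threshold here is arbitrary.
from statistics import mean

def forecast_load(recent_load: list[float]) -> float:
    """Naive forecast: average of the last few measurements."""
    return mean(recent_load[-4:])

def should_sleep(recent_load: list[float], threshold: float = 0.2) -> bool:
    return forecast_load(recent_load) < threshold

overnight_samples = [0.35, 0.22, 0.15, 0.12, 0.10, 0.09]
print("Power down capacity carrier:", should_sleep(overnight_samples))
```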

Advanced scheduling mechanisms and noise cancellation processing tasks are also 5G processes that rely on AI. By combining big data, IoT and AI, disruptive technology advancements start to revolutionize traditional industry verticals.

As CSPs and hyperscalers deploy edge networks to implement 5G, they will, by nature, also be enabling the computing infrastructure needed to host AI workloads. The available edge cloud resources will serve different players, with abundant computing and connectivity resources at the edge. Innovative business models will derive from this flexible resource assignment, generating a matrix of new cooperation models.

At Dell Technologies, we see the combination of 5G, AI and data connectivity at the edge as a transformative platform that enables new possibilities for enterprises, governments and society in general. AI will allow machines and systems to function with intelligence levels similar to those of humans. 5G will connect sensors and devices at speed while AI simultaneously analyzes and learns from data, enabling real-time feedback loops.

Trends in GPU/FPGA/Other Edge AI Acceleration Approaches

When it comes to edge AI acceleration, there are several distinct options available. This allows for a tiering of acceleration capabilities, fitted to the demands of the AI/ML application while taking into consideration other factors like cost, space, power consumption and heat dissipation.

Additional AI/ML capabilities are also being integrated into the upcoming CPU architectures of various chipset vendors. As an example, Intel’s upcoming 4th Generation Xeon Scalable Processor will include a new instruction set for deep learning, with matrix multiplication instructions that promise much-improved AI/ML performance by default. This will enable the movement of AI/ML functions to the edge, without the need for add-on accelerators such as GPUs or FPGAs, and do so in a power- and space-efficient manner.

For increased AI/ML processing at the Edge, though, PCIe Accelerator Cards are the way to go. As an example, Dell’s edge/Telecom tailored PowerEdge XR11 and XR12 are ruggedized, short depth, NEBS Level 3 and MIL-STD certified servers, supporting the expansion of AI/ML capabilities via PCIe, providing a tiering of acceleration options for the edge.

3GPP Standardizes 5G AI-based Procedures

3GPP wants to ensure 5G becomes a relevant part of this AI/ML innovation fabric at the edge and integrates seamlessly into the broader edge cloud.

Via Technical Report (TR) 22.874, Technical Specification (TS) 28.105 and the new TS 28.908, 3GPP will provide a standard approach to the complete lifecycle management of AI/ML-enabled functions in the 5G network.

Complex network capabilities will be offloaded to AI/ML models to leverage their ability to constantly adapt and optimize for the latest network conditions. Network Planning, Management, and Performance Optimization (including SON) show great promise for increasing network performance and reliability while reducing the overall costs of network administration.

Also, by using open northbound API interfaces, 3GPP will enable the formation of an ecosystem of software developers working with supported development tools that are extensible for AI and ML functions.

Technology for Good

A new data-aware society is forming at the edge, with 5G and AI capabilities enabling not only unprecedented productivity levels but also long-sought employee safety standards and environmental sustainability.

Thoughtfully applied technology has the power to improve operational processes in retail, manufacturing, banking, transportation, healthcare and government. It will reshape modern society’s expectations and possibilities, allow us to dream about a better world and help address some of humankind’s more challenging issues. The power of technology to transform our lives has never been so realistically within reach, but it’s still a race against time.

Source: dell.com

Saturday 21 May 2022

Reducing the Data Gravity Pull


Flashback to when I was 10 years old. I’m trying to assemble two 11-person football teams on my football-field rug, using my toys as the players. I pull out old Star Wars figures, Evel Knievel, Stretch Armstrong and the Bionic Man from the bottom of my toy chest. Luckily, my older sister didn’t throw away her old Barbie and Ken dolls, since I need them to complete my team; they make outstanding cornerbacks. While I might have shifted my interest from action figures to sports, I’m so glad my sister and I didn’t get rid of our toys after only playing with them a few times.

This scenario really happened. It is also a great analogy for why we collectively have a growing data gravity problem. I’ve made it my professional mission to help companies proactively solve their respective data gravity challenges (something akin to organizing and cleaning up my toy collection) before they become a completely unwieldy, data-hoarding problem.

What is Data Gravity?

The concept of data gravity, a term coined by Dave McCrory in 2010, aptly describes data’s increasing pull – attracting applications and services that use the data – as data grows in size. While data gravity will always exist wherever data is collected and stored, left unmanaged, massive data growth can render data difficult or impossible to process or move, creating an expensive, steep challenge for businesses.

Data gravity also describes the opportunity of edge computing. Shrinking the space between data and processing means lower latency for applications and faster throughput for services. Of course, there’s a potential “gotcha” to discuss here shortly.

As Data Grows, So Does Data Gravity

Data gravity and its latency-inducing power is an escalating concern. As the definition of gravity states, the greater the mass, the greater the gravitational pull. The ever-increasing cycle of data creation is staggering and is spurring a sharp rise in data gravity. Consider estimates that, by 2024, roughly 149 zettabytes of data will be created annually – on the order of 4.7 petabytes every second.


What is a zettabyte? A zettabyte is a 1 followed by 21 zeroes: 10^21 bytes, or a billion terabytes.

What does 21 zeroes equate to? According to the World Economic Forum, “At the beginning of 2020, the number of bytes in the digital universe was 40 times bigger than the number of stars in the observable universe.”
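To put the growth estimate above into concrete terms, a quick back-of-the-envelope conversion (assuming the 149-zettabyte figure is an annual total) works out to several petabytes of new data every second:

```python
# Back-of-the-envelope conversion of an annual data-creation estimate into a per-second
# rate. Assumes 149 zettabytes per year; 1 ZB = 1e21 bytes.
ZETTABYTE = 10 ** 21
SECONDS_PER_YEAR = 365 * 24 * 3600          # ~31.5 million seconds

annual_bytes = 149 * ZETTABYTE
per_second = annual_bytes / SECONDS_PER_YEAR
print(f"{per_second / 10 ** 15:.1f} petabytes created every second")  # ~4.7 PB/s
```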

Consequently, this forecasted data growth will continue to intensify data gravity – in a massive way. As with most situations, data gravity brings a host of opportunities and challenges to organizations around the world.

Data Gravity’s Hefty Impact on Business


Data gravity matters for two reasons: latency and cost. The heavier the data, the more latency increases. More latency means less throughput – which increases costs to organizations. Reactive remedies create additional expense for businesses because moving data is not easy or cheap. In fact, after a certain amount of data is amassed, movement may not be feasible at all.

A simplified view of data gravity’s “snowball effect”: more data creates more gravity, which creates more latency and cost, which makes the data even harder to move – so even more data accumulates in place.


Balancing Data Gravity


Data gravity, however, is not all bad. It all depends on how it’s managed. As with most things in life, handling it is all about balance. Consider two scenarios in the context of data gravity: centralized data centers and data creation at the edge.

◉ Centralized data center – In a centralized data center, the data storage and the servers hosting the applications and data management services are in close proximity. Administrators and storage specialists are available to keep them side-by-side. Traditional applications and services, such as relational databases and backup and recovery, must be continuously updated to adapt to fast-growing data.

◉ Data creation at the edge – Edge locations inherently help to achieve lower latency and faster throughput. According to IDC, data creation at the edge is catching up to data creation in the cloud as organizations move apps and services to the edge to boost compute performance. However, as edge locations proliferate, more data is generated onsite at each of them. As a result, data stores grow and grow, often in space-constrained environments, and each new edge location compounds the complexity. Ultimately, organizations can evolve to have terabytes or petabytes of data spread out across the globe, or beyond (i.e., satellites, rovers, rockets, space stations, probes).

The bottom line is that data gravity is real and its already significant impact on business will only escalate over time. Proactive, balanced management today is the must-have competency for businesses of all sizes around the world. This will ensure that data gravity is used “for the good” so that it doesn’t weigh down business and impede tomorrow’s progress and potential.

A Next Step


The data gravity challenge remains across both data centers and edge locations. It is location-agnostic and must be proactively managed. In an upcoming blog, we will look at what needs to be done about it. Left unchecked, data gravity snowballs into bigger issues. Can’t figure out how to move ten tons of bricks? Put it off and you’ll have to move twenty, forty, or one hundred tons.

Source: dell.com

Tuesday 17 May 2022

Advance Sustainability with Data Confidence at the Edge


Corporations are stepping up to invest in sustainability and limit their environmental impact. In today’s global economy, demonstrating sustainability is also critical to earning the trust and respect of customers, investors, employees and other stakeholders. One way organizations can enhance sustainability initiatives is to offset their carbon footprint by purchasing carbon credits from third-party companies that reduce or eliminate greenhouse gas emissions beyond normal business activities.

Solving the Trust Challenge at the Edge

However, carbon credits are an emerging market and are susceptible to fraud, such as falsifying or double-selling credits. Without knowing the trustworthiness of the underlying data, it can be difficult to verify credits, which, in turn, can have a devastating effect on an organization’s credibility. For example, Bloomberg reports that about 30% of companies across various industries have mismatched data in at least one emissions category. And the Financial Review recently reported that as much as 80% of government-issued carbon credits are “flawed” and a “sham.”

Carbon credit tracking needs to be infused with trust and credibility. Otherwise, it’s difficult to verify if the purchased emission reduction actually occurred.

Today, edge data is becoming more central to sustainability decision-making. At the same time, enterprise data is increasingly combined with external data to enhance analytics and artificial intelligence (AI). With large data sets coming from various sources, organizations must ensure that data coming from outside the core data center is trustworthy and transparent.

To answer this challenge, Dell Technologies has developed a technology known as a Data Confidence Fabric (DCF). DCF fulfills the four principles of data trust — attest, transform, annotate and audit — providing a standardized way of quantifying, screening, measuring and determining whether data meets your organization’s relevancy and trust standards to deliver more confident insights.
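Dell hasn’t published DCF’s internals in this post, but the general pattern is easy to sketch: tag each piece of edge data with trust metadata reflecting the attest, transform, annotate and audit principles, and derive a confidence score that downstream analytics can filter on. The fields, weights and score below are invented for illustration.

```python
# Illustrative sketch only: annotating edge data with trust metadata and a simple
# confidence score, echoing the attest/transform/annotate/audit principles.
# The fields, weights, and score are invented for illustration, not Dell's DCF design.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TrustAnnotations:
    device_attested: bool = False        # attest: signed, device-backed identity
    transform_recorded: bool = False     # transform: provenance of any processing step
    source_annotated: bool = False       # annotate: who/where/when the data came from
    audit_logged: bool = False           # audit: entry written to a tamper-evident log

    def confidence(self) -> float:
        checks = [self.device_attested, self.transform_recorded,
                  self.source_annotated, self.audit_logged]
        return sum(checks) / len(checks)

@dataclass
class EdgeReading:
    sensor_id: str
    value: float
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    trust: TrustAnnotations = field(default_factory=TrustAnnotations)

reading = EdgeReading("biodigester-flow-01", 12.7,
                      trust=TrustAnnotations(device_attested=True, source_annotated=True,
                                             audit_logged=True))
print(f"confidence score: {reading.trust.confidence():.2f}")   # 0.75 in this example
```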

DCF for the Good of the Planet


Verifying the accuracy of carbon credits is a great example of how DCF can make a positive impact on an organization and the environment. DCF helps to simplify and speed the process of collecting, measuring and analyzing emission data at the edge for international carbon credit accountability.

Dell Technologies is working with IOTA and ClimateCheck, the creators of a digital measurement, reporting and verification (MRV) solution called DigitalMRV, to help bolster confidence in carbon credit transactions. Built on IOTA distributed ledger technology, the first distributed ledger built for the Internet of Everything, DigitalMRV provides a network for exchanging tamper-proof data on an open, lightweight and scalable infrastructure with a high level of transparency. Combining IOTA’s distributed ledger technology and ecosystem with ClimateCheck’s 20 years of MRV expertise empowers DigitalMRV to provide data confidence. Then, adding DCF from Dell Technologies creates a trusted hardware and software data path that increases confidence and trust between carbon credit buyers and sellers.

A Sustainability Success Story


The joint DigitalMRV and DCF solution was put to the test with a biodigester project at a winery in Molina, Chile. The project recovers and processes organic residuals from the vineyard in an advanced anaerobic digester to produce biogas — a renewable fuel produced by the breakdown of organic matter — for heat and electricity, which creates saleable carbon credits.

The data inputs from the winery are directly secured at the edge by a Dell server at or near the source and then made available in near real-time through the energy-efficient and scalable distributed ledger technology. This provides a simpler, more secure and cost-efficient means of conducting carbon-capture projects. The DigitalMRV advances MRV innovations in combination with Dell Technologies infrastructure and DCF to increase the trust, transparency and utility of climate action metrics, such as carbon credits.

Looking to the Future


As data landscapes spread from the data center to the cloud and edge, ensuring the trustworthiness and transparency of data is essential to optimizing its value. Going forward, this revolutionary solution can be used to propel sustainability goals via carbon credits for infrastructure, smart city transportation and more. Dell Technologies’ commitment to taking action on climate change is important for our company, but it is equally critical for Dell Technologies to demonstrate the significant role technology can play in helping our customers reduce their emissions.

Source: dell.com

Saturday 14 May 2022

Transformation of Reliability Engineering to Platform Engineers


Data Integration Platforms is an internal organization within Dell Digital that accelerates business outcomes by providing secure, robust, as-a-service platform solutions. The organization enables enterprise applications to integrate anything, anywhere, at any time. I would like to share a success story of how we recently transformed our process and organization by introducing a new “Service COE.” This group performs activities in service of stakeholders with the objective of enabling and accelerating the creation of products and enhancing their experience.

Product Model at Dell Digital

Dell Digital started its product transformation journey in 2019 and is committed to embracing the product-driven software development model fully by Q4 2022. Transitioning to the product model is enabling teams to deliver new business capabilities more quickly and increase our software ecosystem’s quality and security. Team members are better aligned with the business in an agile fashion, applying modern methods and technologies, allowing them to own their products end-to-end, deepen their expertise and spend more time on high-value work.

Product Taxonomy is the organization of product teams at Dell Digital in a four-tier hierarchy – Product, Product Line, Experience and Domain. The structure ensures that teams working together are in proximity.

◉ Product is an independently built, delivered, managed and evolved collection of software that provides both business and user outcomes; Products have explicit responsibilities and have well-defined interfaces and workflows, be they API or GUI. The product’s entire life-cycle is owned by a single dedicated product team.

◉ Product Line is a logical grouping of related products that delivers a cohesive business and/or user capability.

◉ Experience is a logical grouping of product lines that enables an end-to-end user outcome; an experience may be via a UI or purpose-specific APIs. A business process may be a suitable substitute for a user outcome.

◉ Domain is the overarching functional area that contains the experiences necessary to deliver a business function. It serves as a logical grouping and does not affect product management or strategy of individual experiences for day-to-day operations.

A Center of Excellence (COE) is an organization or team that supports best practice operation for product teams. These teams provide processes, training and set standards to help product teams be more responsive to customer and business needs, increase productivity and empower teams to deliver high-quality secure software. Examples are End-to-End (E2E) testing or Site Reliability Engineering (SRE).

Product Model within Data Integration Platforms Organization

There are three primary roles in the Product model – Product Manager, Engineer and Designer. The product teams use the Product Operations Maturity Assessment (POMA) tool to understand their maturity status in adopting the Dell Digital Way to develop and deliver products, measured against four levels of maturity.

In early 2020, the Data Integration Platforms organization began transforming teams to operate in the product model for our seven products – B2B, Service Integration, Orchestration, Cloud Integration, Messaging, Streaming and Integration as-a-Service. These teams are supported by a Reliability COE, which focuses on maintaining stability and efficiency across all products and provides tools and training to enable product teams to manage their own operations. By mid-2020, product managers had consolidated multiple project backlogs into a single backlog for their respective areas. By 2021, our roadmap planning process was aligned to the product model.

During 2021, while we wanted teams to be self-directed and autonomous, we realized we were not there yet. As we found cross-product issues, we took them as opportunities to automate and prevent their recurrence in a future release. We gained deeper insights into customers’ operational environment and problems they faced. We discovered opportunities to improve how our product teams and COE are organized.

Introducing the Service COE Team

The leadership team recognized the need to drive improvements across our customer experience and the need for a focused team that could keep the product teams out of the operational whirlwind and more focused on roadmap delivery. As we planned the COE construct of Product Taxonomy alignment for 2022, we envisioned a Service COE team that would enable Dell Digital through best-in-class automation and customer experience around integration products. This group would transform people from the Reliability COE into product engineers.


The key charter of this team is to:

◉ Unify the Customer engagement experience across different products, managing customer service requests through the INaaS (Integration as a Service) engagement process

◉ Enable the product teams to maximize platform engineering focus by owning Customer Engagement

◉ Accelerate automation outcomes by partnering with product teams

◉ Contribute to the smart monitoring framework by developing more dashboards and monitors to improve operational excellence

Reaping the Benefits


We launched the “Data Integration Services COE” in early January, bringing together team members from the product teams and the Reliability COE. Since the team’s inception, we have observed a significant improvement in the product engineering teams’ ability to focus on their roadmap without distractions. Their efforts have contributed to improvements in automation and self-service capabilities using the Integration as a Service (INaaS) framework. These improvements are enhancing our customer experience, improving the quality and speed of product delivery and increasing the POMA maturity levels for all products.

Continuously maturing our products is a key lever of digital transformation. It will optimize our organizational structure and processes, increasing the value we provide to our customers and business partners. Our organization is strongly positioned to meet Dell Digital’s target of fully embracing the product transformation by the end of 2022.

Source: dell.com

Thursday 12 May 2022

Cyber Resiliency: Protecting Critical Data to Protect Your Business


As the average number of cyberattacks per company has risen 31%, the legal and financial consequences of such attacks have increased as well. The result is that, even as IT security budgets and investments grow, organizations are not feeling any more confident in their ability to protect against a malicious breach. The majority of respondents (81%) in the most recent Accenture State of Cybersecurity Report stated that staying ahead of attackers is a constant battle. At the same time, 78% said that they don’t know how or when a cybersecurity incident will affect their organizations.

The rise in volume – and value – of data makes it a prime target for cyberattackers. It’s a critical business asset and cybercriminals recognize that accessing data has tremendous financial upside for them…and an enormous downside for the compromised business. Once they gain access to your data, attackers can:

◉ Remove your access to it by encrypting it with a key only they hold

◉ Attack data protection systems to ensure that all restore capabilities are deleted

◉ Hold it for ransom until payment demands are met

◉ Permanently delete data

◉ Sell data on the dark web

◉ Use the information to expose trade secrets or for corporate espionage

The consequences of an attack don’t just impact the business that is breached. Customers and partners can have their confidential data stolen or exposed as well.

Protecting Your Business Starts with Protecting Your Data

Often, cybersecurity and cyber resiliency are used interchangeably. There is, however, a very important difference. Cybersecurity includes the strategies and tools you put in place to identify or prevent the malicious activity that leads to a data breach. Cyber resiliency, on the other hand, is a strategy to mitigate the impacts of cyberattacks and resume operations after systems or data have been compromised. While cybersecurity solutions are focused on protecting systems and networks from malicious attacks, cyber resiliency helps ensure that damage from attacks is minimized.

When it comes to securing the enterprise, most IT security investments are at the network and application layers. By its very nature, cyber resiliency requires addressing the areas of your business where a cyber event or incident can do the most damage, and naturally that involves your data.

Cyber resiliency at the data layer requires:

◉ Data isolation: Network separation is a critical component of cyber resiliency because it is the last line of defense at the data layer.

◉ Intelligent detection: Monitoring your data access for suspicious activity puts you a step ahead of attackers and limits the damage (a minimal sketch of this idea follows this list).

◉ Rapid recovery: The faster you can recover data, the faster your business can return to the level it was operating at before the attack.
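
As a minimal illustration of the "intelligent detection" idea (an assumed heuristic, not Dell product logic), a data-layer monitor might flag a client whose rate of destructive operations suddenly spikes far above its baseline:

```python
from collections import defaultdict

# Hypothetical sketch of data-layer anomaly detection: flag clients whose
# delete/overwrite rate spikes far above their historical baseline.
# The thresholds and event format are illustrative assumptions only.

BASELINE = defaultdict(lambda: 10.0)   # expected destructive ops per minute, per client
SPIKE_FACTOR = 20.0                    # how far above baseline counts as suspicious

def is_suspicious(client_id: str, destructive_ops_last_minute: int) -> bool:
    """Return True if this client's destructive activity looks like an attack."""
    return destructive_ops_last_minute > BASELINE[client_id] * SPIKE_FACTOR

def handle_event(client_id: str, destructive_ops_last_minute: int) -> None:
    if is_suspicious(client_id, destructive_ops_last_minute):
        # A real deployment would alert, snapshot, or lock out the client here.
        print(f"ALERT: possible ransomware activity from {client_id}")

# Example: a client that normally issues ~10 deletes/minute suddenly issues 5,000
handle_event("app-server-42", destructive_ops_last_minute=5000)
```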

The most effective cyber resiliency strategies build on data protection best practices, including the right level of access controls, immutable copies of data, and anti-virus and anti-malware protection.

Cyber resiliency, and cyber recovery, are also different from disaster recovery (DR), and disaster recovery alone is not enough to ensure resiliency. When attackers target systems, data and backups, they seek to encrypt the backup catalog in addition to the systems and data. DR is online and not isolated to the degree a cyber vault is, so it is vulnerable to these attacks as well. Once the data, systems and backups in production and DR are compromised, the environment is unrecoverable. If the systems are not available and there is no way to recover them, you have a significant data breach and potential data loss incident. Cyber recovery is different because it relies on an isolated, immutable copy of data, allowing you to recover even when the DR location has been breached and infected.

In addition, without a cyber recovery vault it takes significant time to start recovering from the last backups, and you don’t know whether they are good or not. There may be many unsuccessful attempts at finding good data before achieving even partial success. This is a long, labor-intensive, iterative process that is very costly. And even after you are able to recover, you will need to figure out how to eradicate the infection, or confirm it does not exist, before restarting. A cyber recovery solution solves these challenges by providing analytics and forensics to quickly determine the last known good, trusted copies to recover. Unlike disaster recovery, cyber recovery provides automated recovery operations that dramatically minimize the impact of the attack.
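
As an illustration only (hypothetical types and logic, not the actual product analytics), choosing the last known good copy can be thought of as walking the vault copies from newest to oldest and returning the first one whose integrity scan came back clean:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

# Illustrative sketch of "last known good" selection; the real analytics and
# forensics in a cyber recovery solution are far more involved.

@dataclass
class VaultCopy:
    taken_at: datetime
    scan_clean: bool   # result of malware/anomaly analysis on this copy

def last_known_good(copies: List[VaultCopy]) -> Optional[VaultCopy]:
    """Return the newest vault copy whose integrity scan was clean, if any."""
    for copy in sorted(copies, key=lambda c: c.taken_at, reverse=True):
        if copy.scan_clean:
            return copy
    return None

copies = [
    VaultCopy(datetime(2022, 5, 10), scan_clean=True),
    VaultCopy(datetime(2022, 5, 11), scan_clean=False),  # copy taken after the attack
]
print(last_known_good(copies).taken_at)  # -> 2022-05-10 00:00:00
```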

Dell Unstructured Data Solutions – Storage for the Cyber Resilient Enterprise

Dell Unstructured Data Solutions (UDS) enable cyber protection and recovery by acting at the data layer to boost the overall cyber resiliency of business operations that depend on data. With Dell UDS, organizations gain significant advantages in minimizing cyberattack risks related to data integrity and availability. The solution:

◉ Provides an isolated and operational airgapped copy of data

◉ Protects from insider attacks

◉ Creates unchangeable data

◉ Performs analytics and machine learning to identify and detect threats

◉ Quickly initiates recovery of trusted data

In addition to these capabilities, Ransomware Defender offers a protection of last resort: a copy of the data in a cyber vault that is isolated from the production environment. After the initial replication of data to the cyber vault, an airgap is maintained between the production environment and the vault copy. Further incremental replication is done only intermittently, by closing the airgap after ensuring there are no known events that indicate a security breach on the production site. Defender is a highly scalable, real-time event processing solution that provides user behavior analytics to detect and halt a ransomware attack on business-critical data stored on Dell Technologies PowerScale storage clusters.
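
The intermittent replication cycle described above can be sketched roughly as follows. All of the functions here are hypothetical placeholders, not Ransomware Defender or Eyeglass APIs; they simply stand in for the checks a vault orchestrator performs: keep the vault disconnected by default, close the airgap only when production shows no breach indicators, run an incremental sync and then isolate the vault again.

```python
import time

# Hypothetical sketch of the intermittent airgap replication cycle described
# above. None of these functions are real product APIs; they are placeholders.

def breach_indicators_present() -> bool:
    """Placeholder: consult monitoring/analytics for signs of compromise."""
    return False  # assume production looks clean in this sketch

def close_airgap() -> None:
    """Placeholder: temporarily bring up the link to the vault cluster."""
    print("airgap closed: replication link up")

def run_incremental_sync() -> None:
    """Placeholder: replicate only the changes since the last vault copy."""
    print("incremental sync complete")

def open_airgap() -> None:
    """Placeholder: drop the link so the vault is isolated again."""
    print("airgap restored: vault isolated")

def airgap_cycle(interval_seconds: int = 6 * 3600, iterations: int = 1) -> None:
    """Sync intermittently, and only when no breach indicators are present."""
    for _ in range(iterations):
        if not breach_indicators_present():
            close_airgap()
            try:
                run_incremental_sync()
            finally:
                open_airgap()  # the vault never stays connected
        time.sleep(interval_seconds)

airgap_cycle(interval_seconds=0)  # single illustrative pass
```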

Dell Superna AirGap gives customers vault isolation for the highest level of data security by building on the security of AirGap Basic. It includes components of both Ransomware Defender and Eyeglass to ensure the secure transfer of data and the network isolation of the vault PowerScale cluster.

With Dell Unstructured Data Solutions (UDS), enterprise firms can leverage a portfolio that meets the performance, scalability and security demands of cyber resiliency. In addition to a scale-out architecture that enables high bandwidth, high concurrency and high performance with all-flash options, UDS is uniquely suited for cyber resiliency:

◉ Recover 1 PB of data in a few hours. No other vault storage, on-prem or cloud, comes close to PowerScale’s data recovery speed (see the back-of-envelope arithmetic after this list).

◉ Immutability with WORM lock. Data immutability ensures attackers cannot alter or delete data.

◉ AI powered threat detection. Monitoring production data and alerting of suspicious activity puts IT a step ahead of attackers.

◉ Scalable to multiple clusters. A single pane of glass for threat detection and data isolation protects multiple PowerScale clusters.
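
As a quick back-of-envelope check (my arithmetic, not a published specification), "recover 1 PB in a few hours" implies a sustained restore throughput on the order of 70 to 140 GB/s:

```python
# Back-of-envelope arithmetic only: what sustained throughput does
# "recover 1 PB in a few hours" imply? (Not a published specification.)

PB = 10**15  # bytes, decimal petabyte

for hours in (2, 3, 4):
    gb_per_s = PB / (hours * 3600) / 10**9
    print(f"1 PB in {hours} h -> ~{gb_per_s:.0f} GB/s sustained")

# 1 PB in 2 h -> ~139 GB/s
# 1 PB in 3 h -> ~93 GB/s
# 1 PB in 4 h -> ~69 GB/s
```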

Discover how Dell UDS helps customers boost the cyber resiliency of unstructured data by providing comprehensive capabilities to protect data, detect attack events in real time and recover from cyberattacks.

Source: dell.com

Tuesday 10 May 2022

Algorithmic Trading Success: “Hand Off to the Machine”


Quantitative finance firms are facing exponential growth in daily transactions. Their systems now process terabytes of market data and hundreds of thousands—or millions—of jobs. Their trading algorithms need to drive amazingly rapid decision-making at massive scale.

And that’s the quandary. To gain business advantage in a competitive trading market, how can you generate the required infrastructure performance while eliminating any roadblocks and distractions for your valued quant teams?

For me, it’s about eliminating boundaries between technology and trading strategy—for a simple, virtually instantaneous “handoff to the machine.” And that may be easier than you think. Quant firms around the world rely on our Dell Unstructured Data Storage (UDS) solutions to drive this approach. As a technology provider member of the STAC Benchmark Council, we continue advancing the enterprise-grade, high-performance infrastructure that firms rely on to accelerate their models and decisions.

Our approach brings together three elements to help you achieve real business advantage:

◉ Speed – with ease of use and protection

◉ Information and knowledge – from a range of unstructured data

◉ Technique – the human element

Speed with Ease of Use and Protection

High-frequency trading infrastructure needs to be fast, of course—but also easy to use and protect. Dell PowerScale scale-out NAS data storage is a proven infrastructure on all counts.

Its blazing performance (high concurrency, throughput and IOPS) and massive scale help quant organizations ask more questions and create arbitrage opportunities faster. According to STAC benchmarking research, Dell PowerScale All-Flash platforms provide:

◉ Real-time performance on smaller data sets (<10 TB) and near real-time performance on large data sets (>10 TB) at ultra-high concurrency.

◉ Shorter model development times, with results up to 7.7x faster on NBBO benchmark tests.

The latest performance boost begins in July, when the PowerScale F900 and F600 all-flash models introduce QLC flash drives, providing incredible economics and double the node density to support quant firms’ high-capacity financial modeling workloads. The next release of the OneFS operating system (version 9.5), planned for this year, will unlock streaming read throughput gains of 25% or more across our PowerScale F-series all-flash portfolio.

For ease of use, PowerScale integrates easily with existing systems through a broad set of protocols (S3, S3a CAS, NFS, NFSoRDMA and SMB). This is true multi-protocol support that, unlike some alternatives on the market, enables you to access the same data via any protocol, not just one or another.
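
To illustrate what multi-protocol access to the same data can look like in practice, here is a hedged sketch: a file written over an NFS or SMB mount is then read back over S3 using boto3. The endpoint, bucket, mount path and credentials are placeholders, and how a file path maps to an object key depends on how the bucket is configured; this is purely illustrative, not a real configuration.

```python
import boto3

# Hypothetical illustration of multi-protocol access to the same dataset:
# write a file through a POSIX mount (NFS/SMB), then read the same object
# over S3. Endpoint, bucket, path and credentials are placeholders.

MOUNT_PATH = "/mnt/market-data/ticks/2022-05-10.csv"   # NFS or SMB mount (assumed)
S3_ENDPOINT = "https://powerscale.example.com:9021"    # placeholder endpoint
BUCKET, KEY = "market-data", "ticks/2022-05-10.csv"    # assumed path-to-key mapping

# 1. Write via the file protocol (ordinary POSIX I/O on the mounted share)
with open(MOUNT_PATH, "w") as f:
    f.write("timestamp,symbol,bid,ask\n")

# 2. Read the same data back via the S3 protocol
s3 = boto3.client(
    "s3",
    endpoint_url=S3_ENDPOINT,
    aws_access_key_id="PLACEHOLDER",
    aws_secret_access_key="PLACEHOLDER",
)
body = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()
print(body.decode())
```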

And we never stop deepening the comprehensive range of PowerScale data protection and security capabilities essential for your continued success. Highlights include:

◉ Building upon OneFS 9.3 and 9.4 security enhancements through planned submission to the Federal APL in a future release

◉ Comprehensive ransomware detection and remediation options (air gap, cyber vault and more)

◉ Strong authorization through External KMIP and an upcoming release of MFA

◉ Support for more than one Active Directory or LDAP

◉ Highly efficient and writeable snapshots perfect for quick dev and test branches

◉ Active-active replication with strict data consistency, streamlining data management and operations

Information and Knowledge From a Range of Unstructured Data

Combining disparate and unstructured data sources with market data gives quant firms a competitive edge. However, unstructured data sources like social media, news feeds, weather trends, event data and regulatory submissions can quickly increase your data storage requirements.

Firms can take advantage of information and knowledge that others may not be able to, simply by choosing the right storage infrastructure. It is no longer sufficient to pull from third-party data sources over the Internet. The velocity of today’s algorithms requires fast access to on-premises data to take advantage of quickly moving and/or quickly disappearing opportunities.

Dell PowerScale and Dell ECS object storage facilitate rapid insights through their scale-out capabilities, multi-protocol support and easy management of incoming unstructured data.

Technique – Through the All-Important Human Element

We also believe it’s essential to hire the best people you can and give them the tools to succeed. The human element of AI and analytics reigns supreme in quant analysis, and acquiring the best talent is important to accelerating results. With the proper toolsets at their disposal, a good analyst or data scientist can build multiple strategies that generate revenues for the firm.

For example, with the right people and right technique, you’re able to understand:

◉ Which algorithm to use

◉ Tradeoffs between the speed and skill of algorithms

◉ When it makes sense to go with a simpler design

However, even the best talent can be hamstrung by poor infrastructure. Building a fast, easy-to-use data storage infrastructure that will feed the analytics and AI pipelines is paramount to increasing the efficiency of your highly valued, expert staff.

You want your analysts and data scientists working on models and defining new strategies—not searching for data, cleaning it and trying to validate the proper source of truth. These are not value-added operations.

Dell PowerScale enables humans to empower the machines of high-frequency trading and realize their potential. PowerScale’s advances in ultra-high-performance algorithmic environments can help increase your quantitative, algorithmic and high-frequency trading team’s efficiency and performance with best-in-class data access, delivery and management.

Join us at the STAC Summits

Dell Technologies is a Platinum Sponsor at this spring’s STAC summits: in Chicago on May 10, London on May 19 and New York City on June 1. We look forward to meeting with you at one of the events to explore how you can remove the infrastructure “handoffs” to drive success in algorithmic trading.

Source: dell.com