Tuesday, 31 December 2019

NVMe and I/O Topologies for Dell EMC Intel and AMD PowerEdge Servers

If you are a server user considering Non-Volatile Memory Express (NVMe) storage for your infrastructure, then you are seeking to invest in top-of-the-line performance. Leveraging a PCIe interface improves the data delivery path and simplifies software stacks, resulting in a significant latency reduction and bandwidth increase for your storage data transfer transactions.

PowerEdge rack servers have unique configurations that are designed for specific value propositions, such as bandwidth, capacity or I/O availability. At times it can be a challenge to determine which configuration is best suited for your intended purpose!

We at Dell EMC would like to simplify this process by providing the value propositions for each of our PowerEdge rack configurations, to help our customers choose the right configuration for their objectives. Alongside these, we have provided detailed illustrations of NVMe and system I/O topologies, so that customers can easily route and connect their best hardware configurations, and optimally design and configure custom software solutions and workloads.

We can first look at one of our Intel-based rack servers, the R740xd. There are two suggested NVMe and I/O configurations that have unique value propositions:

PowerEdge R740xd with x12 NVMe drives (Maximized Bandwidth)



Figure 1: PowerEdge R740xd CPU mapping with twelve NVMe drives and twelve SAS drives

This 2U R740xd configuration supports twelve NVMe drives and twelve SAS/SATA drives. The NVMe drives are given dedicated PCIe lanes rather than sharing lanes through a PCIe switch, maximizing the bandwidth available to each device. Customers supporting workloads that demand maximum NVMe and storage performance will benefit from this configuration, as maximum bandwidth drives the best throughput (GB/s) to the connected devices, while the SAS/SATA bays provide additional capacity.

PowerEdge R740xd with x24 NVMe drives (Maximized Capacity)



Figure 2: PowerEdge R740xd CPU mapping with twenty-four NVMe drives

This 2U R740xd configuration supports twenty-four NVMe drives. The NVMe drives are connected through PCIe switches, which allows the system to overprovision PCIe lanes to more NVMe drives while preserving I/O slots, enabling low-latency CPU access to twelve devices per CPU. Performance can easily be scaled for various dense workloads, such as big data analytics. This configuration appeals to customers wanting to consolidate storage media to NVMe (from SAS/SATA). Customers requiring large capacity with the low latency of NVMe will benefit from this configuration, with up to 24 NVMe drives available for population.
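
To see why switches are needed here, a quick back-of-the-envelope lane calculation helps. The x4 link width below is typical for NVMe SSDs, and the uplink budget is an illustrative assumption, not the platform's exact allocation:

    # PCIe lane math for a switched NVMe topology (illustrative numbers).
    drives = 24
    lanes_per_drive = 4                          # typical x4 NVMe link width
    lanes_at_drives = drives * lanes_per_drive   # lanes needed at the drives

    uplink_lanes = 32                            # assumed switch-to-CPU uplink budget
    print(f"lanes needed at the drives: {lanes_at_drives}")   # 96
    print(f"oversubscription through the switches: {lanes_at_drives / uplink_lanes:.1f}:1")

Because the drives collectively want more lanes than the CPUs can supply, the switches oversubscribe their uplinks, trading peak aggregate bandwidth for capacity and preserved I/O slots.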

Next, we can look at one of our AMD-based rack servers, the R7425. There are two suggested NVMe and I/O configurations that have unique value propositions:

PowerEdge R7425 with x12 NVMe drives (Maximized Bandwidth)



Figure 3: PowerEdge R7425 CPU mapping with twelve NVMe drives and twelve SAS drives

This 2U PowerEdge R7425 configuration supports twelve NVMe drives and twelve SATA/SAS drives. Eight of the NVMe drives are connected directly to the CPU and four of the NVMe drives are connected to CPU1 through a PCIe extender card in I/O slot 3. Customers supporting workloads that demand maximum NVMe and storage performance will benefit from the maximum bandwidth this configuration provides, driving the best throughput (GB/s) to the connected devices.

PowerEdge R7425 with x24 NVMe drives (Maximized Capacity)



Figure 4: PowerEdge R7425 CPU mapping with twenty-four NVMe drives

This 2U PowerEdge R7425 configuration supports twenty-four NVMe drives. Two PCIe switches, connected directly to the CPUs, are included; they allow the system to overprovision PCIe lanes to more NVMe drives while preserving I/O slots. This configuration maximizes NVMe capacity and reserves slot 3 for additional I/O functionality, but has a lower overall bandwidth. It appeals to customers wanting to consolidate storage media to NVMe from SAS/SATA; those requiring large capacity with the low latency of NVMe will benefit from this configuration, with up to 24 NVMe drives available for population.

Each PowerEdge server sub-group has a unique interconnect topology with various NVMe configurations to consider for implementation. To achieve your data center goals with your NVMe investments, it is critical to understand your NVMe topology, as well as why it is the best option from a value-proposition standpoint.

For the full list of both Intel-based and AMD-based PowerEdge rack server NVMe and I/O topology illustrations, as well as explanations of each configuration's value proposition, please view the full NVMe and I/O Topologies Whitepaper now.

Sunday, 29 December 2019

Dell EMC Data Protection – Two Decades of Leadership in the Gartner Magic Quadrant


Michael Dell unveiled a hugely ambitious moonshot goal to deliver better health care outcomes to one billion people around the world by the year 2030. Turning this near-utopian dream into a reality will require dramatic, transformational changes in how health care is administered and delivered, and at the ticking heart of this program lies data – and lots of it.

Like all industries, the challenge in healthcare isn’t the lack of data, but rather the inability to consistently and efficiently protect, secure, share and ensure its compliance as it proliferates across increasingly distributed, multi-cloud computing environments.

And with organizations increasingly relying on their data to fuel advancements across every sphere of human endeavor – health care, agriculture, energy, education, transportation, manufacturing, etc. – it is imperative to safeguard information in all its forms, wherever it is located, efficiently, predictably and reliably.

That’s why we continue to double down on our investments in innovation so that we can deliver the data protection solutions our customers need to protect their critical data now and into the next decade. In recognition of our innovation and leadership in the data protection market, Gartner once again placed us in the leaders quadrant of the 2019 Magic Quadrant for Data Center Backup and Recovery Solutions! We have been awarded this highly coveted distinction in every Gartner MQ since 1999.

While we earned our placement in the leaders’ quadrant for our track record of consistently delivering a comprehensive suite of physical, virtual and multi-cloud data protection solutions, this latest MQ did not take into account the groundbreaking announcements we made this past April – the launch of PowerProtect Data Manager and our next generation PowerProtect X400 appliance.

As a software-defined data protection solution, PowerProtect Data Manager provides support for VMware, SQL, Oracle and file systems workloads, and will soon be delivering support for container-based workloads deployed on Kubernetes.

PowerProtect Data Manager can be leveraged with our new PowerProtect X400 appliance along with our Data Domain and PowerProtect DD appliances, to deliver the efficiency and scalable performance (scale up and scale out) our customers need to meet their multi-cloud data protection requirements.

And since we moved completely to agile software development, we will be releasing quarterly updates to PowerProtect Data Manager – ensuring our customers get the best-integrated user experience possible from the Dell EMC innovation engine.

Best of all, our existing customers can take advantage of our Path to Power program to transition to PowerProtect Data Manager over time at a pace that is non-disruptive to their business while ensuring the protection of their existing investments in Dell EMC data protection technology.

As the end of the decade rapidly approaches, we’d like to express our deepest thanks to our customers for continuing to place their trust in us to safeguard one of their most precious commodities – their data. We look forward to great things ahead as we move into 2020 and beyond.

Saturday, 28 December 2019

New Dell EMC XtremIO Updates Deliver Enhanced Data Protection and Availability for Customers


Since Dell EMC XtremIO X2 was introduced at Dell Technologies World in May 2017, customers have hailed the platform’s efficient architecture, consistent availability and agile copy data management.

Dell EMC XtremIO X2 6.3, the newest version of the XtremIO software, builds upon those key attributes and can help accelerate IT transformation and data center modernization.

XtremIO X2 6.3 allows customers to employ advanced data protection models, protect their data from cyber and ransomware attacks and reduce cost through increased levels of consolidation.

Among the new features included in Dell EMC XtremIO X2 6.3:

◉ Synchronous Replication: Application availability and service level delivery are fundamental expectations of IT support, even in the event of a major outage or disaster. To deliver even greater availability to our customers, we are pleased to announce support for synchronous remote replication. Complementing asynchronous replication, this solution delivers zero data loss between two XtremIO systems within ~60 miles of each other, adding protection options for those critical applications where even the loss of a single transaction is unacceptable. To maximize flexibility and efficiency, XtremIO replication can be deployed on an individual application basis – synchronous replication for the most critical applications and asynchronous for business-critical applications. Further, replication can be switched between modes with a single click per application. Integration with XtremIO XVC copies ensures multiple versions are available for testing and full DR failover at the target site.

◉ 2x Storage Volumes: XtremIO is an ideal platform for integrated Copy Data Management (iCDM) thanks to its snap-efficient shared metadata architecture, which allows the system to scale linearly regardless of primary or snap copy workloads. Now, each XtremIO system supports up to 32,000 primary/snap volumes, double the number of volumes supported by the last iteration of XtremIO software. This allows for more frequent snapshots to tighten RPOs and bring systems back online with more up-to-date data. Additionally, application and database owners can leverage this increased scale to more actively generate and provision copies of their data to speed application development and testing.

◉ Secure Snapshots: According to the Dell EMC Global Data Protection Index, the cost of data loss is nearly 2X that of unplanned system downtime, with an average cost of $996,000. With XtremIO 6.3, snapshot data can now be completely protected from deletion, preventing data loss or overwrites. Customers define a retention period (for example, 7 days or 1 month) for which the snapshot is kept, after which it is automatically deleted. Customers looking for protection against malicious intrusion such as ransomware, or even accidental deletion, know that their data is safe and protected. This also helps customers adhere to corporate governance and compliance requirements. To ensure the highest levels of security, even admin accounts will not be able to delete these snapshots during the retention period (a minimal sketch of this retention logic appears after this list).

◉ Improved Efficiency and Reduction Guarantees: XtremIO now guarantees 3:1 data reduction ratio (DRR) and 5x overall data efficiency, driving additional consolidation and storage efficiency for your storage investment.
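
As a concrete illustration of the Secure Snapshots behavior described above, here is a minimal, product-agnostic sketch of retention-locked deletion. It models the policy only; it is not the XtremIO API, and the snapshot name and retention value are hypothetical:

    # Retention-locked snapshot policy sketch (illustrative, not the XtremIO API).
    from datetime import datetime, timedelta

    class Snapshot:
        def __init__(self, name, retention_days):
            self.name = name
            self.locked_until = datetime.utcnow() + timedelta(days=retention_days)

        def delete(self, now=None):
            now = now or datetime.utcnow()
            if now < self.locked_until:      # even admin accounts cannot bypass this
                raise PermissionError(
                    f"{self.name} is retention-locked until {self.locked_until}")
            print(f"{self.name} deleted")

    snap = Snapshot("sql-prod-daily", retention_days=7)   # hypothetical snapshot
    snap.delete()   # raises PermissionError during the retention window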

“Dell EMC XtremIO is critical to our IT infrastructure and the addition of synchronous replication, additional storage volume support and secure snapshots will help support our mission-critical applications,” states Mike Hale, Executive Director, Enterprise Architecture, Information Services, Steward Health Care System, one of the nation’s fastest-growing community healthcare systems.  “The end-to-end solutions offered by Dell Technologies, including Dell EMC XtremIO, will allow us to continue delivering the world class integrated care our patients deserve — efficiently and cost-effectively.”

Data availability is one of the most critical requirements for any storage platform and these new updates are designed to ensure that XtremIO continues to improve for our customers.  To date,  XtremIO platforms have a cumulative total of over 280 million lifetime run-time hours.

Customer availability of the Dell EMC XtremIO 6.3 software is planned for the December timeframe. Contact your customer support team for details on planning the upgrade.

Thursday, 26 December 2019

Deploying A New Surveillance System?

Deploying a new surveillance system? Test for the “what-ifs” of system-wide integration.

Accelerate time to deployment, minimize risks, and overcome complexities of surveillance system integration with the most comprehensive lab validation services in the industry.

With the solution from Dell, we can guarantee 100% uptime, no data loss, and no service disruption, which means maximum business continuity.

There can be no questions when it comes to the reliability of your surveillance infrastructure. Given what’s at stake—whether downtime in daily operations, loss of critical evidence, or worse yet, the safety of those under your watch—failure or data loss are not options. But given the complexities of today’s surveillance and IoT landscapes, what’s really being done to prevent it?

Enter Dell Technologies’ worldwide surveillance laboratories. Our team of dedicated engineers has been helping businesses minimize deployment risks while reducing overall support costs through extensive validation of virtual and non-virtual architectures for well over a decade.

While the fast-moving landscape of surveillance can leave any system with flaws and deployment risks, the unparalleled depth of our validation process, combined with our network of major surveillance technology partners and our industry-leading experience in complete surveillance solutions, guarantees unmatched architecture support.

The Goal: Zero Data Loss


Surveillance doesn’t offer second chances. Once the data from a video stream is gone, it’s gone for good.

The way to help prevent data loss is by going above and beyond conventional testing approaches. By understanding the complete test-to-fail environment including compute, storage, networking, and software, we produce benchmarking results using true production workloads that result in a zero-data-loss system. We start by assembling your unique infrastructure and conditions, including specific ISV hardware and software, and building a performance baseline to further refine and repeatedly test against. Then we expand on that baseline up to the point where it reaches standard product and industry utilization levels and begin testing at scale, inserting errors to discover any points of failure.


Figure 1. Validation Process. An application-centric approach to determine a performance baseline under normal and abnormal operating conditions.

A completely validated system is designed to continue to capture each frame of video, even during those failure states. Whether failure stems from something like a failed hard drive, network port, controller from a controller-based platform, or single node on a scale-out storage platform, testing is designed to identify how any error impacts the rest of the system while at maximum utilization, and then we’ll go back through the workload to check for latency or loss.

Simplify Scaling while Reducing Risk and Cost


While typical approaches to system validation begin and end the process with functionality tests, such a basic view of whether the major components of your architecture are functionally operating together is only one step in our comprehensive validation process. Our Surveillance Validation Labs take several extra steps not only to validate your current infrastructure but also to reduce your future deployment risks, further protecting your investments. As your optimized surveillance solutions are bound to change with upgrades and add-ons, our proactive approach helps our partners and customers seamlessly scale their infrastructures while maintaining low-risk and consistent architectures.

In conjunction with our industry-proven validated partners, we equip you with the tools necessary for you to simplify the process of scaling up or adding new functionality to your existing architecture. Following deployment, the Dell Technologies Surveillance engineering team will provide you with baseline data gathered during testing as well as your ISV-specific reference-architecture documentation, including white papers, technical notes, sizing guidelines, technical presentations, and best practices to aid in safely accelerating deployment.

Security and Reliability from Edge to Core to Cloud


Workloads are growing more complex, incorporating tools like high-resolution video and IoT/AI, and the need to secure every bit of data is more crucial than ever; yet this growing technology landscape makes that data increasingly challenging to validate and therefore protect. As the #1 infrastructure provider of global surveillance solutions* dedicated to helping bring the edge, core, and cloud components together, Dell Technologies is uniquely positioned to provide unmatched surveillance solution testing, validation, and integration support. Our Labs are outfitted with leading technology from every major surveillance vendor, allowing us to validate security applications with our extensive portfolio of products and solutions and keep your system reliable and your data safe.

Tuesday, 24 December 2019

4 New Reasons to Consider a 1-Socket Server

There are now 14 reasons why single socket servers could rule the future. I published a paper last April on The Next Platform entitled Why Single Socket Servers Could Rule the Future, and thought I'd provide an updated view now that new products have come to market and we have heard from many customers on this journey.

The original top 10 list is shown below:


1. More than enough cores per socket and trending higher
2. Replacement of underutilized 2S servers
3. Easier to hit binary channels of memory, and thus binary memory boundaries (128, 256, 512…) (see the worked example after this list)
4. Lower cost for resiliency clustering (fewer CPUs, less memory…)
5. Better software licensing cost for some models
6. Avoid the NUMA performance hit – IO and memory
7. Power density smearing in the data center to avoid hot spots
8. Repurpose NUMA pins for more channels: DDRx or PCIe or future buses (CXL, Gen-Z)
9. Enables better NVMe direct drive connect without PCIe switches (OK, I'm cheating to get to 10 as this is a result of #8)
10. Gartner agrees and did a paper.
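
To make item 3 concrete, here is a quick worked example. The channel counts reflect typical 8-channel single-socket versus 6-channel-per-CPU dual-socket parts of that era; the DIMM size is an assumption:

    # Binary memory boundary math for item 3 (DIMM size is an assumption).
    dimm_gib = 16

    one_socket_8ch = 8 * dimm_gib           # 128 GiB: lands on a binary boundary
    two_socket_6ch = 2 * 6 * dimm_gib       # 192 GiB: falls between 128 and 256

    print(one_socket_8ch, two_socket_6ch)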

Since this original article, I've had a lot of conversations with customers and gained some additional insights. Plus, we now have a rich single socket processor that can enable these tenets: AMD's second-generation EPYC processor, code-named Rome.

So what else have we learned? First, from a customer perspective, rack power limits today are fundamentally not changing – or at least not changing very fast. From a worldwide perspective surveying customers, rack power trends are shown below:


These numbers are alarming when you consider the direction of CPUs and GPUs, which are pushing 300 watts and beyond. While not everyone adopts the highest-end CPUs/GPUs, when these devices shift toward higher power, the sweet-spot power is pulled up with them. Then factor in the direction of DDR5 and the number of DDR channels, PCIe Gen4/5 and the number of lanes, 100G+ Ethernet, and increasing NVMe adoption, and the rack power problem is back with gusto. Customers are facing some critical decisions: (1) accept the future rise in server power and cut the number of servers per rack; (2) shift to lower-power servers to keep server node count; (3) increase data center rack power and the accompanying cooling; or (4) move to a colo or the public cloud, though that alone won't address the problem, since colos and cloud providers face the same growing rack power constraints. With the rise in computational demand driven by data and enabled by AI/ML/DL, this situation is not going to get better. Adoption of 1U and 2U single socket servers can greatly reduce per-node power and thus help take pressure off the rack power problem.
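
A quick back-of-the-envelope calculation shows why per-node power matters so much for rack density. The rack budget and node wattages below are illustrative assumptions, not survey figures:

    # Rack density vs. per-node power draw (illustrative assumptions).
    rack_budget_w = 15_000                 # assumed usable rack power budget

    for node_w in (500, 750, 1000):        # rising per-node power draw
        nodes = rack_budget_w // node_w
        print(f"{node_w} W/node -> {nodes} servers per rack")
    # 500 W -> 30, 750 W -> 20, 1000 W -> 15: higher node power directly
    # cuts rack density, which is why lower-power 1-socket nodes help.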

Power problems don’t just impact the data center; they are present at the edge. As more data is created at the edge by the ever-increasing number of IoT and IIoT devices, we will need capable computing to analyze and filter the data before results are sent to the DC. For all the reasons in the paragraphs above and below, edge computers will benefit from rich single socket servers. These servers will need to be highly power efficient, provide the performance required to handle the data in real time, and, in some cases, support Domain Specific Architectures (DSA) like GPUs, FPGAs, and AI accelerators to handle workloads associated with IoT/IIoT. These workloads include data collection, data sorting, data filtering, data analytics, control systems for manufacturing, and AI/ML/DL. The most popular edge servers will differ from their DC counterparts by being smaller. In many situations, edge servers also need to be ruggedized to operate in extended temperature ranges and harsh environmental conditions: data center servers typically support a 25-35°C maximum temperature range, while edge servers need to be designed to operate in warehouse and factory environments (25-55°C max temperature) and harsh environments (55-70°C max temperature). When you reduce the compute complex from 2 processors and 24-32 DIMMs to 1 processor with 12-16 DIMMs, you can reinvent what a server looks like and meet the needs of the edge.

Another interesting observation and concern brought up by customers is around overall platform cost. Over the last few years, CPU and DRAM pricing has grown. Many customers desire cost parity generation to generation while expecting to get Y% higher performance – Moore’s Law at work. But as the CPUs grew in capability (cores and cost), they added more DDR channels, which were needed to feed the additional cores. To get the best performance you must populate 1 DIMM per channel, which forced customers to install more memory. As CPU prices rose and additional DRAM was required, generation-to-generation cost parity broke. In comes the rich 1-socket server: at the system level you can now buy fewer DIMMs and CPUs, saving cost and power at the node level without having to trade off performance.

The last point customers have shared with me is around complexity reduction. Many said they had spent weeks chasing what was believed to be a networking issue when it was the 2-socket IO NUMA challenge I highlighted in the last paper. Those customers are coming back and letting us know. By adopting 1-socket servers, buyers are able to reduce application/workload complexity by not making IT and application developers an expert on IO and memory NUMA. In the last paper I showed the impact of IO NUMA on bandwidth and latency (up to 35% bandwidth degradation and 75% latency increase).

Below is a view of memory NUMA on a standard 2-socket server, where we start with core0 and sweep across all cores, measuring data-sharing latency. We then go to core1 and again sweep across all cores, and so on until all pairs of cores have been measured. The lowest bar is the L2/L1 sharing from a parent core to its sibling HT core; the next level up is all cores within a socket sharing L3; the next level up is across sockets. And to be honest, for the few pairs that are highest, we haven’t yet concluded what is causing that – but I think you get the point: it’s complicated and can cause variability.
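
For readers who want to reproduce the sweep pattern, here is a minimal Python sketch, assuming Linux (for os.sched_setaffinity). Pipe-based IPC overhead dominates the absolute numbers, so they will be far higher than raw cache-to-cache latencies – real measurements use shared-memory ping-pong in native code – but the core-pair structure shows up the same way:

    # Core-to-core latency sweep sketch (Linux only; pipe overhead dominates).
    import os
    import time
    from multiprocessing import Pipe, Process

    def pong(conn, core):
        os.sched_setaffinity(0, {core})       # pin the echo process to one core
        while True:
            msg = conn.recv()
            if msg is None:
                break
            conn.send(msg)                    # bounce the message straight back

    def measure_pair(core_a, core_b, iters=5000):
        parent, child = Pipe()
        p = Process(target=pong, args=(child, core_b))
        p.start()
        os.sched_setaffinity(0, {core_a})     # pin ourselves to the other core
        start = time.perf_counter_ns()
        for _ in range(iters):
            parent.send(b"x")
            parent.recv()
        elapsed = time.perf_counter_ns() - start
        parent.send(None)                     # tell the echo process to exit
        p.join()
        return elapsed / iters / 2            # rough one-way latency, in ns

    if __name__ == "__main__":
        cores = sorted(os.sched_getaffinity(0))
        for a in cores:                       # sweep every core pair, as above
            for b in cores:
                if a != b:
                    print(f"core {a} -> core {b}: {measure_pair(a, b):.0f} ns")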


By going to single socket, IT admins and developers can avoid having to become experts on affinity mapping, application pinning to hot cores, NUMA control, etc., which leads to complexity reduction across the board. At the end of the day, this helps enable application determinism, which is becoming critical in the software-defined data center for things like SDS, SDN, Edge Computing, CDN, NFV, and so on.

So, what new advantages of 1-socket servers have we uncovered?

1. Avoid (or delay) the rack power challenges that are looming, which could reduce the number of servers per rack.
2. Prepare for your Edge Computing needs.
3. Better server cost structure to enable parity generation to generation.
4. Complexity reduction, by not making IT admins and application developers experts on IO and memory NUMA, while saving the networking admin from chasing ghosts.

To better support you on your digital transformation journey, we updated our PowerEdge portfolio of 1-socket optimized servers using the latest and greatest features of the AMD Rome CPU: the PowerEdge R6515 Rack Server and the PowerEdge R7515 Rack Server, shown below.


Sunday, 22 December 2019

Lemons or Le Mans: Why Not Using A Processor-Optimized System Will Leave You Underwhelmed


An Impractical Approach


Would you ever think that a smart car could beat a souped-up sports car on a quarter-mile race track? You could, by modifying the tiny smart car with an over-powered engine packed into its lightweight frame. This is a clever trick to get maximum power over a short distance. However, would you ever race one of these cars on an F1 track? Or tow a boat? Or take kids to swim practice?

Although these mental images are entertaining, a super-powered smart car does not make a useful or effective value proposition for these activities. Think of the stress the engine would put on the brakes, chassis, and steering. Think of the maintenance, component upgrades, and labor that are required to operate such a car.

A Pragmatic Approach


Servers are designed and built in the same way – for specific workloads. They are not the sum of their individual components. Each piece of hardware must be optimized to work with other hardware and firmware to effectively tackle specific workloads. A powerful component without the right support does not perform at its full potential.

If you take the engine of a race car and install it into the frame of a midsized sedan, there will be significant performance left on the table. This is exactly the case when dropping the 2nd Gen AMD EPYC processor (code-named Rome) into a server that was designed for the 1st Gen AMD EPYC processor (code-named Naples).

This makes one wonder about the release of AMD’s 2nd generation EPYC CPU. How will you effectively leverage this technology? Does a drop-in make technical or business sense, especially when comparing to a Rome-optimized system?

Technical Sense


If you’ve ever waited in line to check out at a retail supercenter, you’ve experienced how the throughput of a system is dependent on its slowest part. This may induce anxiety when thinking about replacing old CPUs with new, advanced ones. By placing a Rome processor on a Naples-based platform, you will experience lower performance, decreased capabilities, slower memory speeds, subpar networking, and limited platform scalability. Memory and input/output latency will slow your 64-core AMD 2nd generation EPYC CPU like a boy scout loaded down with a fridge worth of food plus his family’s collection of kitchenware.

Business Sense


Using a server effectively also has business and financial implications. Cobbled-together systems can become labor-intensive when a rack goes down because of an aged component. Operating costs can be 10 times higher in years 4 through 6 than the initial procurement cost of the server. Refreshing your servers around the three-year mark is shown to reduce overall costs. And this is just the cost of operations; it does not account for the better outcomes and innovative solutions your employees will create when they are free to pursue non-maintenance tasks.

Returning to the original analogy, one size does NOT fit all. The Toyota Prius could tow a boat. But why not use an appropriate car or truck? Matching the right workload with the right server will increase performance, automate management, and improve security (i.e., Dell EMC PowerEdge Servers with 2nd generation of AMD EPYC). This includes:

◉ More NVMe for better virtualization and software-defined solutions
◉ Increased cores per socket for hyper-converged infrastructure and virtual machines
◉ Lower latency, Gen 4 PCIe, with GPU slots for data analytics, artificial intelligence, and machine learning


Red Light, Green Light, Go!


Watching a smart car beat a Mustang can be entertaining, but is it a pragmatic solution for towing boats or everyday commuting? Should you drop a 2nd generation AMD EPYC chip into a Naples-based server? We all get excited when a new version of a technology we love is introduced. The hardest part is waiting! Dell EMC PowerEdge is releasing a portfolio of servers that are designed and optimized to leverage the full capabilities of the 2nd generation AMD EPYC processor.

Saturday, 21 December 2019

Embrace DevOps, Kubernetes and Automation with New Dell EMC Unity XT Plugins

Two major shifts are revolutionizing software development. First, the emergence of continuous delivery; and second, a microservice-based architecture that allows for greater scale.

At the heart of these trends is automation, and Dell Technologies has focused on developing integrations with leading DevOps and automation tools to accelerate application deployment and lifecycle management.

Today, we are announcing the availability of two such integrations for our leading midrange storage platform – Dell EMC Unity XT:


◉ Container Storage Interface (CSI) plugin for Dell EMC Unity XT. Download it now on GitHub

◉ VMware vRealize Orchestrator (vRO) plugin for Dell EMC Unity XT. Available today on VMware Solution Exchange

CSI plugin for Dell EMC Unity XT


Dell EMC Unity XT’s performance, simplicity and cloud-ready architecture make it an ideal platform to consolidate workloads for easier management and greater ROI. The Container Storage Interface (CSI) plugin for Dell EMC Unity XT extends that consolidation to containerized workloads as well. With the CSI plugin for Dell EMC Unity XT, customers can deploy stateful applications on a Kubernetes cluster with persistent storage backed by Unity XT’s performance and efficiency. The Dell EMC Unity XT storage classes that are part of the CSI plugin easily map persistent volumes to the storage LUNs and allow dynamic provisioning and mounting of these volumes in an automated deployment workflow.
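
As a concrete illustration, here is a minimal sketch of requesting a dynamically provisioned volume through a CSI storage class, using the official Kubernetes Python client. The storage class name "unity-xt" and the claim details are hypothetical; use the class name your plugin installation actually registers:

    # Dynamic provisioning through a CSI storage class (class name is hypothetical).
    from kubernetes import client, config

    config.load_kube_config()             # or load_incluster_config() inside a pod

    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="app-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="unity-xt",          # assumed class from the CSI plugin
            resources=client.V1ResourceRequirements(
                requests={"storage": "50Gi"}
            ),
        ),
    )

    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="default", body=pvc
    )

Once the claim binds, any pod that mounts it gets a LUN-backed volume without an administrator pre-creating storage, which is the "automated deployment workflow" described above.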

vRO plugin for Dell EMC Unity XT


vRO is an incredibly easy-to-use workflow builder where you can drag and drop procedural building blocks to automate end-to-end processes across the infrastructure stack. With hundreds of external DevOps plugins and tools such as Chef and Puppet, and upstream integrations with vRealize Automation and ServiceNow, vRO is growing in popularity as a de facto standard tool for IT operations automation. The vRO plugin for Dell EMC Unity XT includes a wide range of workflows, such as provisioning, data protection and ESXi host-level workflows, as well as a number of storage object-level tasks.


Get Started today!


If you are using one of the over 47,000 Dell EMC Unity platforms and DevOps is a critical aspect of your business and IT operations, you can install these plugins today and take the first step toward building a self-service model for your stakeholders to consume IT infrastructure.

Thursday, 19 December 2019

Maslow’s Hierarchy of Needs, Applied to 5G

From architecture to operations, 5G networks have the potential to drive industries towards digital transformation in ways that we’ve not seen with prior generations of mobile technologies – whether it be the architectural flexibility to add new capabilities incrementally (prior G’s required end-to-end generational updates) or the industry-centric perspective of 5G use-cases (prior G’s focused on a singular consumer experience). My team and I focus extensively on Four Pillars of 5G Transformation:

1. Network Modernization, towards a virtualized and software-defined network

2. IT and OSS/BSS Transformation towards data-driven decisions

3. Digital Transformation, delivering joint enterprise solutions in partnership with service providers

4. Workforce Transformation, focused on defining the skill sets and new operating models for realizing cost efficiencies and new revenue models

As the Dell Technologies representative in the World Economic Forum’s Global Future Council on New Network Technologies, I am also exposed to the far-reaching implications of 5G beyond the technology and business opportunities being developed on it. The societal value and social good that can be achieved by applying cloud operating economics to mobile networking is something I am seeing first-hand, and seeing these use-cases provides a new sense of purpose in our 5G work. This purpose is bigger than any technology, or architecture, or path to revenue, and it led me to reflect on Maslow’s Hierarchy of Needs, and how the work Dell Technologies performs applies directly to helping achieve that end-goal of benefitting society. I modeled this as follows:

◉ Infrastructure Needs: While it might be a bit self-serving for Dell Technologies that the 5G platform necessary to achieve the top of the hierarchy is built on a foundation of technologies that align directly to our portfolio, there is no denying that the foundational layer of Infrastructure Needs is driven by the ability to modernize network infrastructure, with virtualized and containerized functions residing on a software-programmable architecture integrated with multiple radio technologies in both licensed and unlicensed spectrum bands.


◉ Platform Enablers: Realizing the infrastructure components as a platform enables a new set of capabilities to be exposed to services, driven by APIs that enable new service functions. Technologies such as network slicing and multi-access edge computing are paramount to creating lower-latency, more immersive, and data-driven service offerings.

◉ Services: In the context of 5G, the services layer is the unifying framework upon which new use-cases can be built. These services – orchestration, assurance, analytics, and AI – provide the “glue” between the capabilities of the underlying individual platform functions and the requirements of the use-cases. In other words, services make the platform enablers and infrastructure consumable.

◉ Use-Cases: We tend to focus on the capabilities of 5G to deliver new services to both consumers and industries, such as retail, energy and manufacturing. Sometimes overlooked, as we focus on nearer-term, direct correlations to driving economics, is the ability of 5G to impact society, ranging from government services to military operations, and to deliver step-function improvements in education and healthcare by integrating new technology, expanding the reach of knowledge and personalizing experiences.

◉ Societal Value / Social Good / Economic Benefits: Let’s not overlook, however, the benefits that a healthy, well-educated, connected society can provide. There are several studies performed over long-time horizons that highlight the benefits of education and healthcare to driving down unemployment, increasing global GDP, lowering crime rates, and improving civic engagement. By connecting today’s unconnected population with a range of new network technologies, including 5G, we are all set to benefit.

As Maslow has often described, the lower four layers of the Hierarchy of Needs are driven by deprivation – or having “needs” that are unmet. As I look at 5G positioning in the industry, from use-cases offered by service providers to infrastructure provided by vendors, we are motivated by satisfying the need to build a common platform, operate that platform in new ways, and utilize that platform to drive new service offerings across multiple industries.

Over time, however, the industry will realize these benefits and the “need” will have been met – while the benefits to society and the global economy will persist well after 5G networks have been architected, deployed, and scaled for global operations. Just looking briefly at some use-cases that we can expect to see:

◉ 5G and Education: AR/VR-based immersive experiences, personalized teaching methods tailored to learning styles, special needs assistance, etc.

◉ 5G and Government Services: Improved first-response, road safety, improved security (esp. cybersecurity), transmission of safety information, improved civil services.

◉ 5G and Healthcare: Improved remote patient monitoring, TeleHealth / TeleMedicine, movement of vital health information (such as medical imagery), robotic surgery.

It’s beneficial to put the daily grind of activities that we do for work into the context of a higher purpose – we are making an impact. Dell Technologies believes in #technologyforgood. We have already made advancements in many of these areas as we continue to keep health, education, environment and social good top of mind as we build the most powerful network the world has ever seen.

Wednesday, 18 December 2019

Technology, Globalization, and Disruptors Shape Hyper-digitalization in our Society

Never before in economic history have markets been transformed as quickly as they are today.


This is due to three factors that we have analyzed individually and know well. But it is their interaction that has ensured the high speed of the upheavals that we are currently experiencing. If one component is missing, the dynamic subsides. I am referring to technology, globalization, and disruptors.

The intervals at which new methods and technologies emerge are ever shortening. For example, the Internet of Things is navigating a challenging roadmap; it forms the basis for smart cities and the factory of the future. Artificial intelligence is becoming ever more powerful, even if it is still in its infancy and causes people to smirk along the way. In any case, it is already being used successfully today: for Siri and Alexa, cancer diagnostics, and empathic chatbots. The quantum computer, which is now causing a great stir, is on its way and will completely revolutionize IT as we know it today. The new 5G mobile communications standard will be the foundation technology for all other developments, mainly because of its low latency. The fusion of all these developments will enable further innovations, such as self-driving vehicles and intelligent agriculture solutions.

Second, globalization is a fundamental breeding ground for technological development: partnerships between universities, research institutions, and companies from around the world are becoming ever closer, enabling them to share their knowledge; within various organizations, more people from different backgrounds – and cultures – are working together. This diversity leads to considerable impulses for creativity. At the same time, new product genres, such as the smartphone, are making it a breeze to sell apps, products, and services globally. Market boundaries break down as a result: suppliers no longer come from just Augsburg or Bottrop, but they now also come from Rio and Bangalore. Even startups without significant amounts of capital are in a position to make their innovations available to a global market.

Startups are an essential part of the third component that helps transform markets: disruptors. Both startups and even companies from outside the industry are increasingly turning around established industries with their high levels of creativity and great perseverance. For example, Amazon has come out of nowhere and changed the rules of the game in retail; financial tech sector startups are threatening the banking landscape with new services; Facebook wants to introduce Libra, a digital cryptocurrency; Google and its subsidiary Waymo are pioneers in self-driving cars. And these are just a few examples.

These factors – technologies and methods, globalization, disruptors – may be familiar when viewed individually, but in their interaction they shape the society of the future: a hyper-digitalized society in which IT plays an essential role and in which, of course, the world of work will also look different. In just a few years, the close partnership between man and machine will become a reality and will ensure, for example, that employees work together in a completely new, immersive way or that more equal opportunities in the workplace will emerge as people lose their prejudices of others.

The fact is, with every economic upheaval, entire professions have disappeared, so I expect that the use of automation and AI will eliminate entire professions. But it is also a fact that new professions have always compensated for the loss of jobs. And I am sure that this transformation will also create new forms of economic activity and ways of work that will ensure prosperity.

But on the way to this society of the future, there are other serious challenges to overcome, such as burgeoning population growth, the scarcity of raw materials, and climate change. In addition, there is an IT-inherent challenge, namely a growing attack surface for cyberattacks. For society, hyper-digitalized also means hyper-attackable. And this applies not only to production lines and server landscapes but also to each citizen: will he or she become a transparent citizen in the process? No matter what, technology is already making it possible today. The GDPR does put a stop to this, but it will require a huge rethink in this regard in order for data privacy and self-determination of data and information to endure in the future. The greed of companies and countries for our data is not diminishing, and these entities have strong allies in technology: AI, for example.

Change management is key to success


Against the backdrop of the many layers of complexity here, success in the future will largely depend on the desire to transform: the digital transformation, of course, but also a personal, economic, and political transformation. When we talk about the smart cities of the future, for example, we are automatically referring to the “smart citizen” and the “smart government.” But companies must also learn to think smart, especially the former “top dogs” – the companies that often seem overwhelmed by the rapid changes in the markets. They have no choice but to question everything they found good in the past and reinvent themselves today, so they can be prepared for the future. Change management is the key to success in a hyper-digitalized society.

Tuesday, 17 December 2019

On the Edge of Technology


Introductions first! I work as a Technologist within OEM | Embedded & Edge Solutions, a global sales and engineering division that helps customers integrate the extensive Dell Technologies portfolio with their own Intellectual Property, products and services.

Movement towards the edge


In my role, I frequently get asked about current trends in the marketplace. While many of our customers continue to be focussed on traditional IT targeted appliances and solutions, I’m seeing increasing deployment of IT into the Operational Technology (OT) world with the two combining to deliver the Internet of Things (IoT) and Industrial Internet of Things (IIoT).

If we go back five to ten years, the momentum was all about moving applications to the cloud. However, while many environments and associated applications continue to move to the cloud, the trend in OT environments, like factories and transport infrastructure, is now reversing, with compute infrastructure moving towards the edge.

What is driving this dynamic?

Why? Edge solutions such as pre-emptive maintenance, autonomous or automated systems, AI, command and control systems are all driving the need for lower latency for communications and data access. We’re also seeing higher performance requirements from edge applications and demand for higher edge availability with uptime maintained with or without access to the core or cloud. Meeting these requirements is a big challenge from a far-off cloud or core data centre. Bringing IT infrastructure to the edge is the obvious solution.

Connected enterprise edge


What constitutes the edge? A typical connected edge environment consists of sensors, IT infrastructure (compute, networking, storage, data protection) and security, both physical and cyber-security. In this blog, I’ll discuss sensors and IT infrastructure, and will focus separately on the important topic of security in a follow-on blog.

Sensors


There is a vast range of sensors available today, offering many capabilities, such as component tracking, vibration, humidity, temperature, liquid detection, air quality and many more. From a Dell OEM perspective, in addition to sensors integrated into our own systems, we also use our partner network to source sensors for the solutions we design.

Today, many industrial machines and systems already include the full array of sensors with a growing number of manufacturers making the sensor data freely accessible. While there are still a number of manufacturers who continue to keep their sensor data private, end-customer pressure to enjoy open integration is significantly reducing this trend.

With the ever-increasing number of sensors now being deployed, customers need to aggregate the associated data in a managed way with only the relevant data being transmitted back to the core or cloud infrastructure for retention and analytics. As a result, we’re seeing Artificial Intelligence capabilities being implemented at the edge, core and cloud. Indeed, we already have a number of customers and partners working in this area, such as Software AG and Noodle AI.
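
A minimal sketch of the edge-side aggregation described above follows: keep a local rolling summary and forward only the relevant readings to the core. The threshold and the send_to_core() transport are hypothetical placeholders, not part of any Dell product:

    # Edge-side filtering sketch: forward only actionable sensor readings.
    from collections import deque
    from statistics import mean

    WINDOW = deque(maxlen=60)          # last 60 samples (~1 minute at 1 Hz)
    TEMP_LIMIT_C = 70.0                # alert threshold, assumed for illustration

    def send_to_core(event: dict) -> None:
        print("forwarding:", event)    # stand-in for an MQTT/HTTPS uplink

    def on_sample(temp_c: float) -> None:
        WINDOW.append(temp_c)          # aggregate locally
        if temp_c > TEMP_LIMIT_C:      # transmit only the relevant data
            send_to_core({"type": "overtemp", "value": temp_c,
                          "rolling_avg": round(mean(WINDOW), 2)})

    for t in (24.8, 25.1, 71.3, 25.0):
        on_sample(t)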

Rugged IT infrastructure – our range of options


Moving onto IT infrastructure, our edge to the core and cloud portfolio, augmented by selected third-party partner solutions, provides customers with a complete and secure platform to deploy applications in any location.

For example, the Dell Edge Gateway 3000 and 5000 series offers an extremely rugged platform: extended environmental support from -30°C to +70°C in operation, extensive wired and wireless connectivity, industry certifications such as marine, military, aviation and transport, plus various mounting capabilities and enclosures for enhanced protection up to IP69K. This means that our gateways can be mounted almost anywhere, allowing our customers to deploy software at the edge for data aggregation, edge analytics, control functions and device management. The Dell Edge Gateway also includes multiple options for connectivity, including LTE, WiFi, Zigbee and Bluetooth.

For less extreme environmental conditions, or where greater performance or configuration flexibility is required, the Dell Embedded PC 3000 and 5000 series provides rugged features with 0°C to 60°C operation, greater compute power with up to an Intel Core i7, more storage, and PCI slots for expansion.

To meet increasing requirements for a high-performance rugged desktop, we also offer the OptiPlex XE3 industrial grade PC, offering 24×7 duty cycle, integrated wireless, dust filters and secure chassis plus added support for GPUs. For rack mount deployments, the Precision 3930 1U workstation delivers a rugged GPU enabled 1U rack mount chassis, with PCoIP integration with Dell zero client endpoints.

Rounding out the rugged portfolio, we have the PowerEdge XR2 Rugged and the carrier-grade R740 and R640. The XR2 is designed for extreme operational environments, supporting 40G operational shock, up to 55°C inlet temperature and dust filters. In addition to certification for industrial, military and marine environments, this little beauty is also a vSAN Ready Node, enabling high-availability clustering and HCI deployments at the edge.

The carrier-grade systems offer other rugged and edge features, including AC and DC power options, dust filters and fire-retardant components. As part of the OEM Embedded and Edge XL programme, we also offer up to 18 months’ extended platform operational life on the PowerEdge XL systems, compared to standard servers.

Distributed edge data centre solutions and trends


We’re currently working with many customers to create edge data centre solutions, offering various levels of performance and ruggedness, enabling our customers to stand up micro datacentres in almost any location.

Integration with VMware’s virtualisation and management solutions is also enabling the creation of hyper-converged edge platforms. For example, with edge licensing solutions like VMware ROBO, we can create high-availability two-node clusters that can be deployed in rugged racks in harsh environmental conditions, or even wall-mounted in places like train or bus stations, airports, petrol garages and communications hubs.

For deployments with greater processing or storage requirements, we also provide modular datacentre solutions via our own ESI team or via partners like Schneider, enabling the deployment of our standard, non-ruggedised datacentre infrastructure, such as the comprehensive PowerEdge server ranges, the VxRail hyper-converged platform, the range of Analytics Ready Solutions, maximum storage performance with the PowerMax NVMe platform, Isilon scale-out NAS storage and many more.

The future


Where to next? I believe that this relentless movement of compute, analytics and storage to the edge will continue to accelerate. In this blog, I’ve described some of the technologies and initiatives that we are providing to enable customers to innovate faster, build industry-leading solutions and scale. Watch out for my follow-on blog on security at the edge.

Thursday, 12 December 2019

Delivering a Modern Streaming Architecture for 5G

What Are the Uses for Data Analytics?


Analytics offers many benefits to organizations as they embark upon digital transformation, including:


◉ Increasing efficiency and driving cost out of operations
◉ Maximizing customer satisfaction
◉ Developing new products and services
◉ Using streaming data to respond to issues and opportunities in near real-time

The number of use cases made possible by data analytics seems limitless and, on top of that, we are only now beginning to glimpse the potential of machine learning and other forms of artificial intelligence to open new frontiers of what organizations can achieve with data.

But the more we at Dell Technologies engage with customers on a variety of use cases, the more we learn that many are still struggling with the prerequisite task of getting data into their analytics environments in order to deploy the use cases they want. This task is called ETL, or “extract, transform, load”, and it can be defined as the process of reading data from one or more data sources, transforming that data into a new format, and then either loading it into another repository (such as a data lake) or passing it to a program.
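
For readers new to the term, a minimal ETL sketch in Python follows. The CSV file name, its columns and the SQLite “data lake” stand-in are all hypothetical, chosen only to make the extract, transform and load stages concrete:

    # Minimal ETL sketch: extract from CSV, transform types, load into SQLite.
    import csv
    import sqlite3

    def extract(path):
        with open(path, newline="") as f:
            yield from csv.DictReader(f)          # read rows from the source

    def transform(rows):
        for row in rows:
            yield (row["device_id"], row["timestamp"],
                   float(row["signal_dbm"]))      # normalize into a new format

    def load(records, db="lake.db"):
        con = sqlite3.connect(db)                 # stand-in for a data lake
        con.execute("CREATE TABLE IF NOT EXISTS signal "
                    "(device_id TEXT, ts TEXT, dbm REAL)")
        con.executemany("INSERT INTO signal VALUES (?, ?, ?)", records)
        con.commit()
        con.close()

    load(transform(extract("probe_export.csv")))  # hypothetical input file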

Dell Technologies and its telecom data analytics ISV partner, Cardinality, have been working together to help customers resolve complex ETL issues so that they can do the kind of analytics they want. What follows are real-world examples that illustrate three key pain points customers tend to experience with ETL, and how we have helped resolve them.

Data Myopia


Telcos sit on a wealth of data, but organizational or technical barriers can often make it difficult for data engineers and data scientists to gain access to the data they need. The data analytics team at one tier-1 telecom operator faced just such a challenge. Only able to access data from the IT environment, the team couldn’t get the data they needed to start answering questions about the factors that influence customer satisfaction. To solve this problem, Cardinality conducted a pilot on a small footprint of Dell EMC PowerEdge servers to demonstrate to the Network Operations team the value that could be unlocked with a simple use case: device analytics. In a matter of days after configuring its ETL Engine to ingest data from the operator’s network probes, Cardinality was able to produce a real-time dashboard of all the mobile phones and other devices on the network, and show vital information such as the types of SIM cards the devices were using and which could be upgraded to 4G networks. This operator was able to build on this initial use case to create a complex Network Customer Experience use case that delivers measurable business benefits by using machine learning to analyze over 350 network KPIs in order to predict and circumvent customer churn.

Creeping Complexity


New technology spaces typically offer developers a wealth of tools to choose from. Many tools, both open source and proprietary, exist in the world of data analytics (e.g., Informatica, Talend, Kafka, StreamSets, Apache NiFi, Airflow, and many more). While choice can be good, the use of too many tools by too many different people in a single environment can make management a costly ordeal.

One telecom operator that Dell Technologies recently worked with had fallen victim to the creeping complexity that can be introduced when there is too much choice and too little control. Over time, different developers decided to use whatever “flavor of the month” tools looked interesting to them, and this resulted in a situation where it became next to impossible to debug existing use cases and create new ones.

Dell Technologies and Cardinality were able to quickly clean things up with the Cardinality ETL Engine, which provides an elegant and easy-to-maintain mechanism for ingesting data. The result is that the operator is now able to build use cases without having to worry about the complexity of ETL.

Data Indigestion


A variation on the complexity theme has to do with the complexity of data sources themselves.

Dell Technologies helped another customer that was saddled with keeping up with a variety of data formats from different network probes. The challenge of running multiple probes is compounded by the fact that probe vendors occasionally change their data formats, requiring rework and telecom expertise to get the data back into the formats used for analytics. An additional problem is that some older, proprietary data formats can’t always be used with newer ingestion tools, introducing latency and performance limitations, and this ingestion “indigestion” can limit the kinds of real-time use cases that can be put into production.
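The usual cure is a normalization layer: one small parser per probe format, all emitting a single canonical record. The sketch below illustrates that idea with two invented vendor formats; the field layouts are hypothetical.

def parse_vendor_a(line):
    # Vendor A (hypothetical): pipe-delimited "timestamp|imsi|cell_id|bytes"
    ts, imsi, cell, nbytes = line.strip().split("|")
    return {"ts": ts, "imsi": imsi, "cell_id": cell, "bytes": int(nbytes)}

def parse_vendor_b(line):
    # Vendor B (hypothetical): comma-delimited, same fields in a different order
    imsi, nbytes, cell, ts = line.strip().split(",")
    return {"ts": ts, "imsi": imsi, "cell_id": cell, "bytes": int(nbytes)}

PARSERS = {"vendor_a": parse_vendor_a, "vendor_b": parse_vendor_b}

def normalize(source, line):
    """Route a raw probe record to the right parser; flag what can't be parsed."""
    try:
        return PARSERS[source](line)
    except (KeyError, ValueError):
        return None  # counted and reported rather than crashing the pipeline

When a vendor changes its format, only one parser needs to change; the analytics layer downstream keeps consuming the same canonical schema.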

By modernizing the customer’s environment with the Cardinality ETL Engine, we were able to relieve the customer of the headache of having to manage a multitude of data sources and were further able to vastly improve streaming performance. The number of data records ingested and parsed per day increased from 9 billion to 23 billion and the number of files needing to be discarded due to format quality issues dropped to nearly zero.

“Plumbing” Matters


The Dell Technologies Service Provider Analytics with Cardinality solution dramatically reduces customers’ data ingestion pain points with an ETL Engine that allows customers to:

◉ Get into production fast with “out-of-the-box” ETL functionality that is purpose-built for telecom environments

◉ Collect streaming and non-streaming data with low latency and high throughput

◉ Lower OPEX by reducing the resources needed to manage multiple data formats from different sources

◉ Scale from small to huge on a unified data analytics platform

Dell Technologies and Cardinality offer customers a Kubernetes-based microservices platform that spans the data pipeline from ETL to analytics to prebuilt telecom use cases. The platform is tuned to run on scalable, high-performance Dell EMC PowerEdge clusters and is integrated with Dell EMC Isilon and Pivotal Greenplum. Together, Dell Technologies and Cardinality are committed to ensuring customers can make the most of data analytics.

Wednesday, 11 December 2019

Transformation in Action: The McLaren Digital Transformation Story

Just over 50 years ago, Bruce McLaren and his newly formed racing team set out to design and build Formula 1 racing cars that would bear the McLaren name. Armed with an immense passion for racing and a willingness to push the boundaries, the McLaren team worked tirelessly to perfect their racing ability and dominate the sport. With 20 World Championships and 182 Grand Prix victories, the McLaren name is synonymous with the winning spirit of auto racing.


Today, McLaren’s success extends far beyond the race track. Leveraging the same sensor technology that powers their race performance, McLaren is using predictive analytics and biometric data to drive game-changing innovations across other industries.

I recently had the opportunity to catch up with Edward Green, Principal Digital Architect of The McLaren Group, to discuss McLaren’s digital transformation journey and what they learned along the way.

When did you realize that digital transformation was no longer an option for McLaren? What was the driving factor?


Digital transformation has been a natural part of our business for over 30 years, and as such has become part of our core DNA. From the early days in F1, we have always sought to find a competitive edge, capturing, analyzing and presenting data to find marginal gains on track. Transforming what we do has to have a direct correlation to on-track performance.

Off track we have simulated, modeled, and predicted data, not only in F1, but across industries. We then applied this learning to other sectors including healthcare and public transportation.

Across the rest of the business groups we must react to the pace of change and the level of customer demand. Hiring staff and developing talent has put pressure on our systems and tools. As such, transforming the way we do business is critical to ensuring we can sustain a pace of growth and satisfy demand.

Operational technology and processes are often looked at first as a way to make efficiency savings, which means we can respond to business requests in a more agile manner.

What were the biggest obstacles you encountered as you began your own digital transformation? How did you overcome those obstacles?


Our legacy estate is growing and will continue to do so. It is hard to find the time in a busy technical landscape to migrate legacy systems used by few users. In some cases, those users are historic racing cars, which require specialized, dedicated hardware to keep them running for classic and heritage display events.

This continues inside the enterprise as more and more applications become bespoke, custom in-house developments by our software teams. We are looking to take API-driven approaches across systems in places where we can make use of common systems at a group level. Rather than replacing, we will look to renovate and create digital foundations which we can then build upon.

Building relationships with end-user communities through direct technical peering, or leveraging our partner community of experts, enables us to talk the same language as our diverse user communities. In addition, we have made investments in our technical estate with Dell and our VxFlex environment, allowing us to create a digital platform which can provide cloud-like services for users whilst harnessing performance and providing low latency for on-premises systems.

One of the biggest obstacles continues to be cybersecurity, which can often seem to slow down transformation. We have partnered and bolstered internal capabilities. Our logs and cloud infrastructure are analyzed by our partners at SecureWorks, whilst teams onsite work directly with users. With confidence that our partners are keeping us secure and providing actionable insight, our cyber teams can work with users to help understand their case for transformation or where legacy systems need to be migrated.

What have been the biggest improvements McLaren has seen in its business because of your digital transformation? Any improvements you did not expect?


Our culture has become more open and users spend more time engaging with each other around business challenges and not solely technical issues. As an IT team, we feel closer to the business and staff can now see or recognize the value of their work.

With better planning, we have improved our agility to respond and to design future-state architectures that the business can make better use of. This gives us more time to react to market changes and stay ahead of our competition.

An improvement we didn’t expect was our use of estates and facilities. We are now using more collaborative areas, and work is taking place in multiple locations. As more data and processes are transformed through automation and AI, we are spending less time on data collection or processing and more time collaborating on the output of data.

We have also noted closer relationships between teams and within departments, resulting in tighter service delivery and easier support models.

What advice would you give another business just starting their own digital transformation journey?


The journey doesn’t stop! That would be the first piece of advice I would give someone looking to start their own transformation journey. That might sound scary, but it really does bring a new way of thinking and a new approach to business challenges. The results that come from this approach arrive faster, and solutions are often delivered before they would traditionally be deemed ready for consumption.

Find the right people, empower leaders and encourage change. Transformation is not just about technology; building trust and finding the right people are critically important.

Work closely with estates and facilities, as many of the transformation activities will result in both a physical and a digital change. Working as a team helps make the implementation of new solutions or environments frictionless and provides a more seamless organizational impact.

Finally, digital transformation means you might find yourself talking about domains or areas of the business which seemingly have little to no relevance to your day job. They might seem disconnected or irrelevant, but they often lead to a better ability to put yourself in someone else’s shoes for the day, or to better understand their business challenge or use case.

McLaren is a great example of the success that can be realized when using data to drive better business outcomes. Transforming your business and putting data to work for you can be a complex process that fundamentally changes how you operate. As with McLaren, using data can open new doors to business success. Mr. Green said it best: “Digital transformation is about having fun and creating change; it should make you curious and provide a positive business challenge.”

Tuesday, 10 December 2019

Trading Places – Financial Sector Upgrades to Larger Monitors to Accommodate Shrinking Workspaces

Many of us spend our workday in front of a monitor, and this couldn’t be truer for employees within the financial sector. In fact, traders typically manage 8 to 12 monitors at a single desk in order to capture all the details that are mission-critical for their work.

Over the last few years, the financial sector has moved to larger and higher-resolution monitors, such as the Dell 43 Ultra HD 4K Multi Client Monitor (P4317Q) and the Dell UltraSharp 49 Curved Monitor (U4919DW), to replace some of the smaller monitors that traders were previously using. A recent Dell-sponsored IDC research study found that more than 80% of employees surveyed believed that monitors with bigger screens help improve productivity at work. Immersive technologies will continue to drive demand for high-performance monitors with higher resolution, larger screen sizes and newer form factors to support rich content and workloads that include a variety of data-centric tasks.

In many workplaces, including the financial sector, workspaces are shrinking as offices modernize and seek to maximize people per square foot. This was the inspiration for the Dell UltraSharp 49 Curved Monitor, the world’s first 49-inch curved dual QHD monitor.


One of our customers in the financial sector came to us with a desire to redefine their traders’ work desks. They wanted a simplified and clean desk for traders without compromising their visual experience.

Focusing on traders’ key needs (large screen space, crisp images and an excellent viewing experience), we developed an ultra-wide, high-resolution monitor that is curved and height-adjustable for ease of viewing. Dell specifically created this panel and helped develop critical components along with key technology partners to bring the U4919DW monitor to market. This monitor offers more screen real estate to view content and dual QHD resolution for striking clarity, and delivers a truly immersive experience. A Dell-commissioned Forrester study found a 12% productivity gain when traders switched from four 19-inch FHD monitors to two 34-inch WQHD (larger screen size, higher resolution) curved monitors, resulting in nearly 100 hours of incremental productivity per trader annually.

“We don’t do a lot of financial trading here, although we do some killer spreadsheet work and a little media. We publish quarterly reports; Market Watch is our oldest and best known. It was (and still is) an awesome and unforgettable sight to see 14 years of quarterly data uninterrupted with associated charts being displayed on the Dell UltraSharp 49 Curved Monitor. The more you can see, the more you can do,” said Dr. Jon Peddie, president of Jon Peddie Research.

That said, larger screen sizes require more efficient management of screen real estate to maximize productivity. The Easy Arrange feature in Dell Display Manager (DDM) specifically addresses this need by offering customizable layouts. It allows users to organize multiple applications on the screen and snap them into a template of their choice, making multi-tasking effortless. You can even use a hot key to toggle between layouts.


For IT decision-makers, Dell Display Manager enables smarter centralized management of display assets and inventory, allowing IT admins to control monitors remotely. Imagine a typical trading floor where IT admins can remotely switch monitors to standby mode after trading hours and turn them back on the following day. IT managers and end-users alike can expect to improve their productivity with the newly updated DDM.

In finance, we know that time is money and a high-quality display provides traders with a clear view of fast-moving market activity. Make sure you don’t miss critical trading opportunities while changing screens or programming your settings. By optimizing your workspace, you’ll reap the benefits quickly.

Sunday, 8 December 2019

Huron Digital Pathology and Dell Technologies Accelerate Adoption of Digital Pathology and Artificial Intelligence in EMEA


Owing to population growth, an ageing population and a rise in chronic disease, the global healthcare industry has no choice but to become more effective and efficient in the way it operates. Added to that, there is a growing shortage of healthcare workers.

Cancer cases on the rise but fewer pathologists


For example, in Europe, the number of cancer cases continues to rise while the number of pathologists is declining, with a decreasing percentage of medical graduates choosing to specialise in that area. As a result, waiting times for patient diagnosis are growing, with results sometimes taking six to seven weeks to arrive. This is not good for patients, as it delays the start of important treatment.

Technology to plug the gap


While there’s no single panacea for this complex issue, I believe that technology has an important role to play in driving efficiencies as well as freeing up workload. I have previously discussed how IT technology is transforming healthcare by speeding up genome sequencing, leading to faster and more accurate diagnosis. Today, I want to examine the role of digital pathology, machine learning and Artificial Intelligence in modern clinical practice.

The traditional process


We’re all familiar with the traditional process of providing tissue samples for testing. Once collected, the sample is sent by the GP for analysis in a lab, where it’s placed on a glass slide to be examined by a pathologist under a microscope.

When additional opinions are needed, this typically involves the glass slide being transported to another hospital by courier or taxi. Challenges include slides being lost in transit, delays in communicating results to patients, plus the whole logistical challenge of indexing glass slides and keeping track of where each sample is at any given moment. As you can imagine, this isn’t an efficient process. I’ve heard that one pathologist actually wrote his own programme to track where the slides were on their journey!

And, of course, not being able to have all the experts share and comment at the same time is not only inefficient, given the time involved in writing and reviewing reports, but also deprives doctors of the opportunity to react to someone else’s view in real time and have an interactive discussion.

Transforming glass slides into shareable knowledge


The good news is that digital pathology has already disrupted this workflow. Whole-slide imaging, faster networks and cheaper storage have made it increasingly easy for pathologists to manage digital slide images and share them for clinical use, enabling real-time consultation and decreasing the time it takes for the patient to receive an accurate diagnosis.

Apart from improving the patient experience, this has proven particularly helpful for smaller hospitals that wouldn’t normally have access to high-quality research expertise. From a training perspective, digital imaging technology also provides a way to preserve, share, duplicate and study a specimen, benefiting medical researchers and scientists.

The next stage in digital pathology


However, up until this point, adoption has been relatively slow, as hospital labs already have microscopes in place and the glass slides still need to be prepared before scanning. And so, while digitising slides has been a hugely significant development, in many ways sharing is only half the solution.

What if technology could work with any digitised images, index them and make them searchable? What if Artificial Intelligence could interrogate multiple libraries of images so that when a clinician detected a tumour, the database could be searched to find all similar tumours? The clinician could then evaluate the treatment and subsequent outcomes before designing an effective personalised treatment for the patient.
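The core technique behind such a search engine is content-based image retrieval: represent each slide image (or image patch) as a numeric feature vector, index the vectors, and answer queries by nearest-neighbour lookup. The Python sketch below uses random stand-in embeddings purely to illustrate the mechanics; it is not Huron’s implementation.

import numpy as np
from sklearn.neighbors import NearestNeighbors

# Stand-in for embeddings a CNN would produce from archived slide patches.
rng = np.random.default_rng(42)
archive_embeddings = rng.normal(size=(10_000, 512))  # 10,000 archived patches
case_ids = np.arange(10_000)                         # maps each vector back to a case

# Index the archive for cosine-similarity lookups.
index = NearestNeighbors(n_neighbors=5, metric="cosine").fit(archive_embeddings)

# Embedding of a suspect region from a new slide (also synthetic here).
query = rng.normal(size=(1, 512))
distances, neighbours = index.kneighbors(query)
print("Most similar archived cases:", case_ids[neighbours[0]])
# Each hit would link back to the stored slide, report and diagnosis for review.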

First-ever image search engine for pathology


The good news is that a new, cost-effective solution, based on standard IT computing and able to run alongside existing scanners and PACS, is now available. Huron Digital Pathology has worked with Dell Technologies OEM | Embedded & Edge Solutions to co-develop a reference architecture, effectively an IT-based appliance solution, which works in conjunction with existing equipment.

Huron’s AI-powered image search software searches for and identifies similar tumours, along with the reports and diagnoses of other pathologists, in seconds, giving the pathologist multiple opinions to aid decision-making. The solution, powered by Dell EMC server technology, is based on Intel architecture with GPU cards from NVIDIA, complete with a pathologist viewing workstation and Isilon storage.

I believe that this solution will emphatically prove the business case for digital pathology, free up pathologists to take on additional cases, enable better primary diagnostics, create a platform for more effective multidisciplinary teams and, over the longer term, enable new breakthroughs in patient treatments.

Looking ahead


The addition of machine learning and AI to digital pathology is opening the door to new advances in personalised medicine. The ability to mine features from slide images that might not be visually discernible to a pathologist also offers the opportunity for better quantitative modelling of diseases, which should lead to improved prediction of disease aggressiveness and patient outcomes.

An established player in the US and Canada over the last twenty years, Huron aims to work with Dell Technologies to make digital pathology, machine learning and AI ubiquitous in hospitals and research institutions throughout Europe. Our team is proud to contribute to this development. I believe this is a great example of how IT technology can be a force for good, driving human progress and making a real difference to society.