Thursday 30 January 2020

What Cities Can Teach Us About Technology Design

When it comes to design, inspiration can be found in the most unlikely places. Take designing complex IT systems. You can learn a lot about how to make your company’s technology work by examining city street plans. Don’t believe me? Let’s take a look.

While I love my city, Seattle, dearly, I find its physical layout incredibly frustrating. At times, there appears to be no logic to the way the city is organized. We have multiple grid patterns, oriented at different angles and intersecting in a crazy mesh of streets and avenues. Streets will suddenly veer off in a new direction, change names, or stop entirely only to resume a mile down the road.

How did this happen? The answer to that question reveals a lot about how complex systems evolve and how your initial mistakes can cascade over time to create a tangled web. Chances are, you may recognize some of the factors that make your own company’s IT such a patchwork quilt. I break these into three big lessons.

Lesson 1: Lack of cohesive planning


The story of modern Seattle begins in the mid-nineteenth century, as the settlement grew from an isolated hub for the lumber trade into a larger, more populous community. It became clear that the town needed to grow, but there was no consensus on what that growth should look like.

Two of the community’s leaders, Doc Maynard and Arthur Denny, each had their own vision for expansion. Rather than uniting forces or hashing out a compromise, they each launched their own, disconnected projects. Those grids that seem to intersect at crazy angles? They are the direct result of these incompatible expansion strategies.

These leaders had divergent perspectives on how they wanted the city to work. Unsurprisingly, this created a chaotic, broken-grid city layout. The implications of this are still being felt today, more than a century later.

Think about your own company’s technology. Much of your infrastructure and design probably has similar history, with multiple people planning along very different lines. Cloud has only further complicated things as the iron grasp of IT over infrastructure has loosened and lines of business and developers have had the ability to blaze their own trail one credit card swipe at a time.

The consequences of disjointed planning are immense, far-reaching, and enduring. This fragmentation makes it much more difficult to onboard new technologies, slowing down progress and innovation. It also leads to greater cost and inefficiency in managing infrastructure, as well as introducing substantial systemic security risk.

Lesson 2: Failure to anticipate future requirements


One of the hardest design challenges is to look beyond the needs of the moment and anticipate the future. People are simply not very good at predicting the big, disruptive waves that will shape their world. This creates challenges in systems that are forced to adapt to requirements and technologies for which they were never designed.

This was certainly the case in Seattle’s development. Much of the city’s downtown structure was defined at a period when the very first automobiles were just starting to come into being. Another core modern technology, electricity, was also just beginning to see widespread use. Had these early builders had more foresight, they would have anticipated that cars would become ubiquitous and that every home and business would need to be electrified. They would have built the infrastructure to support these technologies.

The history of Seattle is rich in unintended consequences caused by misreading future needs. By the turn of the twentieth century, the city had an enviable public transit system with over 50 miles of cable car track. When budget challenges hit in the 1930s, the streetcars were sold for scrap and the track ripped out. This set back public transit in the city severely. It would be several decades before Seattle would see as extensive a rail system return.

Consider your own company. Has it ever made decisions based on short-term needs that created long-term challenges? There are a lot of organizations out there that are still reckoning with heavyweight, expensive, and inefficient CRM and ERP systems. These systems made sense at the time but now look costly and obsolete, yet are seen as too expensive and difficult to abandon. Instead of bringing in cutting-edge new technologies, some companies will keep relying on legacy applications to save money. Others might move to the cloud due to the low upfront cost, only to be surprised by the long-term expense and complications.

Lesson 3: Building systems of insufficient scale


It isn’t just that Seattle’s early leaders failed to anticipate the technologies and changes that would define their city’s future. They also had no concept of the sheer scale that their city would someday achieve. This is far from surprising – it would have been remarkably prescient if they had known – but that gap in their design has huge implications.

In 1880, the city was home to around 4,000 people. Just ten years later, the 1890 census counted over 42,000 residents. That explosive growth would continue, with the city passing 80,000 by 1900 and 235,000 by 1910. Today, Seattle alone, separate from the larger metropolitan area around it, has over 720,000 people. Those people are still drawing on the same physical space and depending on many of the same elements of underlying infrastructure (e.g. roads, bridges, etc.).

When you take a system designed for one level of capacity and force it to handle vastly greater numbers, you often see serious performance issues. This is true whether we are talking about Seattle’s antiquated plumbing system or your own company’s technology infrastructure. Your systems may not have been designed for the number of users or customers that you now face. This creates serious challenges in availability and speed.

One clear place where we see scale problems in enterprise IT is around data. The rapid growth in the volume, variety, and complexity of data has strained almost every company’s technology infrastructure and systems. Few companies anticipated this data load, or the number of processes and workloads that they would need to digitize and manage in the era of digital transformation. As ever more of the business goes digital, existing infrastructure and systems will be stressed to their limit.

Conclusion: Making the best of imperfect systems


System planning never works the way we might wish it did. We almost never get to start over with a clean slate; we inherit the decisions of those who have come before us. Seattle’s city leaders and planners would probably love to have energy-efficient, modern smart grids, and a more rational road system, but they must do their best with what they have. They can’t simply shut down the city and start over.

You likely have your own version of this challenge when it comes to your business. For success, you need to meet your technology, processes, and people where they are today. Start by reviewing your current applications and deciding what to retire, migrate, and re-platform. Look at your existing people and their skillsets and ensure you have the right competencies in place ahead of advancing your cloud ambitions. When it comes to processes, avoid taking on sweeping re-platforming initiatives, like containerization, all at once. Instead, take a sustainable and gradual approach that minimizes disruption and risk to the business. By making smart choices, you can evolve your technology infrastructure without hurting your business and your stakeholders today.

Tuesday 28 January 2020

Benefits of Obtaining Your Dell EMC Advanced Analytics Specialist Certification

The Dell EMC E20-065 exam, which leads to the DECS-DS Dell EMC Data Scientist Advanced Analytics Specialist credential, is an assessment designed to evaluate your knowledge of advanced analytics. Every professional who wants to grow in the IT field should consider preparing for and passing the Advanced Analytics Specialist certification exam.


The Dell EMC E20-065 Advanced Analytics Specialist certification is organized around several exam objectives, each with its own weighting. The content for the Data Scientist Advanced Analytics Specialist exam covers the Hadoop ecosystem and NoSQL, Natural Language Processing, and Social Network Analysis.

Attendees

This Dell EMC Advanced Analytics Specialist exam is designed for:
  • Aspiring data scientists and data analysts who have completed the associate-level Data Science and Big Data Analytics program, and
  • Computer scientists who need to learn MapReduce and techniques for analyzing unstructured data such as text.

Prerequisites

  • Completion of the Data Science and Big Data Analytics course.
  • Proficiency in at least one programming language such as Java or Python.
  • The Dell EMC Advanced Analytics Specialist certification proves your ability to apply the techniques and tools required for Big Data Analytics.
  • It evaluates a candidate’s knowledge of concepts and principles applicable to any technology environment and industry, rather than knowledge of particular products.
Preparation for the E20-065 Dell EMC Advanced Analytics Specialist exam will enable you to:

  • Become a significant contributor on a data science team.
  • Assist in reframing a business challenge as an analytics challenge
  • Apply a structured lifecycle approach to data analytics problems
  • Apply relevant analytic techniques and tools to investigate big data
  • Tell a compelling story with the data to drive business action
  • Use open-source tools such as "R," Hadoop, and Postgres.
  • Generate and execute MapReduce functionality.
  • Gain experience with NoSQL databases and Hadoop Ecosystem tools for analyzing large-scale, unstructured data sets.
  • Develop a working understanding of Natural Language Processing, Social Network Analysis, and Data Visualization theories.
  • Use advanced quantitative methods, and apply one of them in a Hadoop environment.
  • Apply advanced techniques to real-world datasets in a final lab.
  • Prepare for EMC Proven Professional Data Scientist certification.

With the amount of data being generated and the growth of the analytics field, data science has become a necessity for companies. To make the most of their data, businesses from every area, be it finance, marketing, retail, IT, or banking, are looking for data scientists. This has led to massive demand for data scientists all over the globe.

Benefits of Dell EMC Advanced Analytics Specialist Certification

Career Growth:

If you are looking for a way to jump-start your career, earning your data science certification is an important step to take. Even if you are already experienced in data science, a professional certification from an advanced data science program can still help you grow in your career, stand out from the competition, and even increase your earning potential.

Flexibility, Freedom, and Benefits:

If you are looking to get established in a field where you will always have plenty of options and never be bored in your line of work, data science is the way to go. Many different industries leverage the power of data science, from healthcare and banking to retail and entertainment. Just about every sector and company these days is recognizing the value of data and the need for qualified data scientists.

Structured Education Program:

When you choose to learn on your own, it typically takes a lot of preparation and self-direction to determine what is required to succeed as a data scientist.

It is also easy to miss valuable lessons that you would otherwise get with a structured education program, as you will likely only get bits and pieces of information from free sources.


A formal education program equips students with everything they need to master data science in a logical, organized manner. Because data science can be complicated, having this structure is valuable even if you already have some data science experience.

Keeps You Updated on the Latest Industry Trends

Enrolling in a data science program will enable you to stay on top of the latest trends in the domain. Learning new skills is fundamental when it comes to expanding your knowledge base.

If you have other things on your plate, such as a full-time job, it can be difficult to learn these skills from scattered sources. It is typically more efficient to enroll in a data science program with an accredited institution so that you can improve your learning experience. This can also make you an asset to your current employer and any potential future employers.

Shows Your Dedication

Companies recognize and understand the fact that enrolling in an education program and getting certified can be challenging. Not only do students have to study and work hard to succeed, but they often have other responsibilities to deal with, such as family life and a full-time job.

Choosing to sit for the Dell EMC Advanced Analytics Specialist exam shows potential employers just how serious you are about data science and demonstrates your level of dedication.

It also says a lot about your character: this is someone who, even with a jam-packed schedule, decided to better themselves and get certified.

These are all valuable qualities in any employee, but especially in someone who wants to work in data science.

The Bottom Line

If you are serious about pursuing a career in data science and ready to stand out from the competition and make a big impression on potential employers, there is no better time than now to earn your Dell EMC data science certification.

The certification will not only validate your ability to communicate conclusions and recommendations using visualization methods, it will also help employers assess your proficiency. We understand that busy professionals do not have time to study the entire syllabus recommended by Dell EMC, so for convenience, AnalyticsExam offers up-to-date practice questions.

Dell 2020 Networking & Solutions Technology Trends


Since joining Dell as CTO for Networking & Solutions in June 2019, I have been energized by the opportunities and the extent of technology development at Dell Technologies, as well as the deep partner engagement in R&D.  Heading into 2020, our customers require distributed and automated infrastructure platforms that support a wide range of use cases from data center automation to edge and 5G enterprise verticals. Let’s take a closer, more technical look at what’s behind these trends.

Cloud-native software drives intelligent automation and fabrics in data centers


Advances in infrastructure automation are leading to full automation stacks incorporating OS configuration management, DevOps tools, and platform stack installers and managers. These bundles enable a new operational model based on fully-automated, zero-touch provisioning and deployment using remote tools for networking, compute and storage infrastructure. This has become a critical requirement for large deployments, delivering the ability to rapidly deploy and manage equipment with the least amount of operational cost at scale. This is a key enabler for edge use cases.


Network configuration and fault mitigation is rapidly becoming automated. Telemetry data availability and integration with orchestration applications allows the network to be more than one static domain. Using data analysis and fault detection, automatic network configuration and self-healing can become a great differentiating factor in selecting one solution over another.

The tools for infrastructure lifecycle management, including firmware upgrades, OS updates, capacity management and application support, are becoming an integral part of any infrastructure solution. These trends will accelerate with the help of AI software tools this year and continue to expand to every part of the infrastructure.


Micro-services based NOS design fuels the next wave in Open Networking


Network operating systems (NOS) are evolving into flexible cloud-native microservices designs that address many of the limitations of traditional networking platforms. One of the biggest benefits is the ability to support different hardware platforms and customize the services and protocols for specific deployments. Gone are the days when the only option network operators had was to accept a monolithic, generic OS stack with many features that would never be used. This new architecture is critical for supporting edge platforms with constrained CPU and power with targeted networking missions.

Community-based NOS platforms such as SONiC (Software for Open Networking in the Cloud) have the added benefit of accelerating development through a community.  SONiC is gaining momentum as a NOS for both enterprises and service providers due to its disaggregated and modular design.  By selecting desired containers and services, SONiC can be deployed in many use cases and fit in platforms of many sizes.

The recent increased industry involvement and community creation has placed SONiC on an accelerated path to support more use cases and features. The increased development activity will continue through 2020 and beyond. SONiC has also grabbed the attention of other projects and organizations such as ONF and TIP/Disaggregated cell site gateways. These projects are looking into ways to integrate with SONiC in their existing and new solutions and driving a new set of open networking use cases.

Merchant silicon extends to cover more complex networking requirements


Programmable packet forwarding pipelines, deep buffers, high radix, high line speeds, and high forwarding capacity merchant silicon switches coupled to a new generation of open network operating systems are enabling effective large scale-out fabric-based architectures for data centers.  These capabilities will enhance both data center and edge infrastructure, replacing the need for a chassis design or edge routers with custom ASICs. In 2020, for the first time, we expect to see merchant silicon-based network solutions achieve parity with most of the traditional edge and core networking platforms, providing a scale out design that is better aligned to converged infrastructure and cloud requirements.

Programmable silicon/data plane enabling streaming analytics


Programmable data planes are maturing with P4 compilers (as the community approach) and many other available languages for creating customized data pipelines. There is also a growing number of NOSs that support programmable data plane functionality. These new software tools enable the creation of unique profiles to support specific services and use cases, including edge functionality, network slicing, real time telemetry and packet visibility.  These powerful new capabilities provide control and AI-based mitigation, as well as customized observability at large scale in real time. Developers have access to the data pipeline and will be able to create new services that are not possible in traditional networking. This is going to be one of the key new trends in 2020.

Storage fabrics using distributed NVMe-oF over TCP/IP solutions


NVMe has emerged as the most efficient and low-latency technology for storage access. NVMe-over-Fabrics (NVMe-oF) extends the protocol to work across networks using fabric transports (Fibre Channel, RoCE, TCP/IP). TCP/IP and RoCE have a clear cost-effectiveness advantage, with 100GbE being four times as fast as 32G FC at about 1/8th of the cost. Between those two protocols, TCP/IP emerges as the solid choice due to similar performance, better interoperability and routing, and the use of lossless networks only where needed. NVMe-oF/TCP transport provides the connectivity backbone to build efficient, flexible, and massive-scale distributed storage systems. The key to unlocking this potential is service-based automation and discovery that controls storage access connectivity within the proven SAN operational approach, with orchestration frameworks extended across multiple local storage networks through the federation of both storage and fabric services.
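To make the transport concrete, below is a minimal sketch of how a host could discover and attach an NVMe-oF/TCP subsystem using the standard nvme-cli utility driven from Python. The target address, ports, and NQN are hypothetical placeholders, and a production deployment would rely on the automated discovery and fabric services described above rather than hand-run scripts.

# Minimal sketch: discover and connect to an NVMe-oF/TCP subsystem with nvme-cli.
# The address, service IDs, and NQN below are hypothetical placeholders.
import subprocess

TARGET_ADDR = "192.168.10.50"                    # hypothetical storage target IP
DISCOVERY_PORT = "8009"                          # typical NVMe/TCP discovery service port
IO_PORT = "4420"                                 # typical NVMe/TCP I/O service port
SUBSYS_NQN = "nqn.2020-01.com.example:subsys1"   # hypothetical subsystem NQN

def run(cmd):
    """Run a command and return its stdout, raising if it fails."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Ask the discovery controller which subsystems are available over TCP.
print(run(["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", DISCOVERY_PORT]))

# Attach the chosen subsystem; its namespaces then appear as /dev/nvmeXnY block devices.
run(["nvme", "connect", "-t", "tcp", "-a", TARGET_ADDR, "-s", IO_PORT, "-n", SUBSYS_NQN])

# Confirm the new namespaces are visible to the host.
print(run(["nvme", "list"]))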

Distributed edge emerging as a requirement for Industry vertical solutions


Emerging use cases at the far edge for analytics, surveillance, distributed applications and AI are driving the need for new infrastructure designs. Key constraints are the operating environment, physical location, and physical distribution, giving rise to the need for a comprehensive remote automated operational model. New workload requirements are also driving the design. For example, Gartner predicts that “by 2022, as a result of digital business projects, 75% of enterprise-generated data will be created and processed outside the traditional, centralized data center or cloud*.” New innovations at the edge include converged compute and networking, programmable data plane processors, converged rack-level design, micro/mini data centers, edge storage and data streaming, distributed APIs and data processing. We are at the start of a new phase of development of custom solutions for specific enterprise verticals that will drive new innovations in infrastructure and automation stacks.


Wireless first designs are driving new infrastructure platforms for enterprises and service providers


There is tremendous growth in wireless spectrum and technologies including 5G, 4G, shared spectrum (CBRS), private LTE, and WiFi, coupled with a new desire to transition to wireless as the preferred technology for LAN, campus, retail, etc. This is driving the need for wireless platform disaggregation into cloud native applications for core, radio access network (RAN) and WiFi that support multiple wireless technologies on shared infrastructure. Disaggregation is starting at the core and moving to the edge, leveraging edge compute with automation in a distributed model, which is bringing all the benefits of cloud economics, automation and developer access to wireless infrastructure and creating massive new efficiencies and new services.

Smart NICs are evolving to address massively distributed edge requirements


The new generation of powerful Smart NICs extend the model of simple NIC offload and acceleration by adding heavy data plane processing capacity, programmable hardware elements, and integrated switching capabilities. These elements allow many data flow and packet processing functions to live on the smart NIC, including networking, NVMe offload, security, advanced telemetry generation, advanced analytics, custom application assistance, and infrastructure automation. Smart NICs will be a key element in several valuable use cases: distributed network mesh, standalone intelligent infrastructure elements (e.g. radio controllers), autonomous infrastructure, distributed software defined storage, and distributed data processing.  Smart NICs will serve as micro-converged infrastructure extending the range of edge compute to new locations and services beyond edge compute.

The age of 400G – higher speeds driving new fundamental network switch architecture


Native 400G switches coupled with 400G optical modules are now available and breaking the 100G speed limit for data center interconnects. This is creating challenges with power and thermal, as well as space and layout, and moving the industry to co-packaged optics.

In addition, new silicon photonics (ZR400 and others) enable long-reach Dense Wavelength Division Multiplexing (DWDM) transport given the availability of merchant optics DSPs. This is going to fundamentally transform networking, data center interconnect and edge aggregation by collapsing the need for stand-alone DWDM optical networks, therefore bringing great efficiencies, automation and software-defined capabilities to the entire networking stack.

Stay tuned—2020 is set to be a year packed with innovation as we strive to deliver customers the technology that will drive their businesses into the future.

Sunday 26 January 2020

Top 5 Reasons to Adopt PowerOne for SAP S/4HANA


As the head of our Business Applications Solutions engineering team for Dell EMC, I make it a point to talk with our customers about moving to SAP S/4HANA.

Many of our customers have migrated at least one of their SAP systems to SAP S/4HANA, but it’s usually limited to system roles like sandbox, test, or dev. So, what’s keeping them from tackling mission-critical workloads that help run the business, like Finance, HR and Manufacturing?

Two things: complexity and risk. Every one of our customers has plans for modernizing their IT infrastructure, but they don’t always know how to get there. When it comes to mission-critical apps, most of our customers are also evaluating hybrid cloud options.

We recently announced Dell EMC PowerOne, a new autonomous infrastructure system designed to simplify and automate IT operations across Dell EMC compute, storage, and networking. This all-in-one system has a built-in automation engine designed to be the automation control plane that delivers VMware infrastructure and removes the burden of building, maintaining, and supporting the infrastructure. This will allow our customers to focus on their business.

In simple terms, PowerOne gets customers up and running faster, automating thousands of tasks (like deploying, expanding and repurposing resources) and doing it all through an integrated GUI or a single API. As I considered this level of infrastructure automation and its ability to truly manage the entire system as Infrastructure as Code, I realized that PowerOne provides an SAP-certified, best-of-breed performance platform for SAP landscape consolidation. This platform helps you prepare and chart a clear migration path for SAP applications to HANA and to S/4HANA.
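To illustrate what driving the whole system through one API can look like in practice, here is a rough Infrastructure-as-Code sketch: authenticate, declare a desired cluster, and poll the resulting job until the automation engine finishes. The host name, endpoint paths, and payload fields are hypothetical placeholders rather than the documented PowerOne API, so treat this purely as a conceptual outline.

# Conceptual Infrastructure-as-Code sketch against a converged-infrastructure REST API.
# The host, endpoints, and payload fields are hypothetical, not the actual PowerOne API.
import time
import requests

BASE = "https://powerone.example.local/api"   # hypothetical management endpoint
session = requests.Session()
session.auth = ("admin", "password")          # placeholder credentials
session.verify = False                        # lab-only; use proper certificates in production

# 1. Declare the desired state: a small VMware cluster carved out of pooled resources.
desired_cluster = {
    "name": "sap-dev-cluster",
    "computeNodes": 4,
    "storageGb": 20480,
    "network": "prod-fabric-a",
}

# 2. Submit the request; the automation engine expands it into the underlying tasks.
job = session.post(f"{BASE}/clusters", json=desired_cluster, timeout=30).json()

# 3. Poll the job until the platform reports the cluster is ready to use.
while True:
    status = session.get(f"{BASE}/jobs/{job['id']}", timeout=30).json()
    if status["state"] in ("SUCCEEDED", "FAILED"):
        print("Provisioning finished:", status["state"])
        break
    time.sleep(30)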

Here are five examples of what you can achieve with PowerOne, and why:
  • Modernize the core. Move from SAP ERP to SAP S/4HANA while upgrading to the PowerOne all-in-one system with automation to easily configure and deploy your infrastructure.
    • PowerOne delivers compute, networking, and storage in a converged infrastructure solution to reduce the time it takes to adopt a modern SAP infrastructure for both physical and virtual environments.
    • Businesses running SAP on industry-standard VMware vSphere can spin up VMware clusters in just a few clicks. Consider cloning a system and migrating it into a new cluster without taxing a specialist or going through your ticketing system.
    • With this in place, SAP Admins can more easily manage and expand the SAP infrastructure by using standardized API integration at the PowerOne system level.
  • Fast track your implementation with SAP HANA Tailored Datacenter Integration (TDI). All PowerOne components are SAP HANA certified, including Dell EMC PowerEdge servers, PowerSwitch networking, and PowerMax storage.
    • Dell EMC PowerEdge servers use the latest Intel processors to maximize application performance and SAP Application Performance Standard (SAPS) values, which are proven by the many “best-of” published SAP sales and distribution (SD) two-tier benchmark results.
    • The highly intelligent PowerMax array is designed to meet the storage capacity and performance requirements of an all-flash enterprise data center. It delivers advanced storage technologies, featuring many data services, and provides mission-critical availability with SRDF for your SAP production systems. PowerMax offers great scalability and can support up to 162 HANA production nodes. PowerMax service levels can protect production performance from other workloads, allowing SAP customers to separate applications and databases based on performance requirements and business criticality.
  • Lower TCO for SAP landscapes. PowerOne provides massive SAP landscape consolidation opportunities for all your SAP instances, both production and non-production, as well as other workloads. It delivers consistent performance at scale for highly virtualized SAP landscapes and effortlessly handles high random reads and writes. You can take advantage of the new flexible consumption options available with Dell Technologies On Demand.
  • Get a jump start with hybrid-cloud. PowerOne is available as a validated design for the Dell Technologies Cloud, so you can begin to simplify your cloud experience. In the future, it will be integrated with VMware Cloud Foundation.
  • Simplify and protect SAP landscapes. Make SAP maintenance easier by automating the management of SAP systems and their copies and refreshes using Dell EMC’s integration software, Enterprise Storage Integrator for SAP Landscape Management (ESI for SAP LaMa), with SAP’s management GUI, SAP Landscape Management. You can automate the copy and refresh process and create backups faster with smaller data footprints for large SAP landscapes.
When managing the move to SAP S/4HANA, IT organizations want the best time to value available on the market. Adopting PowerOne modernizes the core and sets you up for a future transition to the cloud.

Saturday 25 January 2020

Data Protection Evolution in the Coming Decade – Part 3


In Part 1 and Part 2 of this blog we looked at the four major technology trends impacting the IT industry. Now let’s discuss what the attributes of future data protection will look like in the coming years.

1) Data protection will be cloud native


Data protection will live in an IT environment that spans on-premises, private and multiple public clouds, and will have to be native to such environments. It will operate wherever the workloads and data reside and at the granularity of the new entities that will be used: containers, functions, micro-services, etc. It will be designed as a cloud-native application and delivered as SaaS to enable seamless scalability and portability that enables protection policies to follow workloads wherever they reside across multi-cloud environments.


2) Business service protection – beyond data protection


As IT environments evolve, protecting just the data is insufficient. For example, we used to protect the data/log files of a database or application, and during a recovery event, present these files to a DBA who was then responsible for bringing the database back online. In dynamic cloud-native environments or in distributed edge configurations, there is no DBA to manage the recovery process.  Instead, the data protection service will need to deliver the automation required to not only restore the data, but also configure and set up the environment (platform, compute, services, networking) to fully automate the recovery process.  This level of automation requires the data protection service to protect all the entities that constitute the business service and fully orchestrate the recovery to ensure the lowest RTOs possible.

3) Autonomous Protection & Recovery -> Resilience


As users get used to fully-automated technology services and devices, the business service protection will also evolve to become automatic and eventually autonomous. It will automatically discover all the entities in the IT environment associated with the business service, such as containers, databases, files and file systems, objects, etc. It will then leverage AI/ML algorithms to assign the appropriate protection policy to each of them.

Once protection becomes autonomous, recovery flows will follow. Continuous intelligent health monitoring will detect failures and trigger autonomous recovery to a previous known state to resume the business service. It will even be able to predict some failures and use preemptive measures to avoid service disruptions. It’s an inevitable change since the growing complexity and dynamic nature of the environment will not allow for manual control. Data protection will morph into Business Service Resilience (BSR).
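As a rough conceptual sketch of that monitor-detect-recover loop (not a product implementation), the snippet below probes the health of a business service and, on a failed check, triggers an automated restore to the last known good state. The service names, probe, and recovery steps are hypothetical stubs.

# Conceptual sketch of autonomous detect-and-recover for a business service.
# The probe logic, service names, and recovery steps are hypothetical placeholders.
import time

LAST_KNOWN_GOOD = {"orders-db": "snapshot-2020-01-24T02:00Z"}  # placeholder restore catalog

def probe(service):
    """Return True if the service responds to a health check (stubbed here)."""
    return True

def restore(service, restore_point):
    """Roll the service back to a previous known good state (stubbed here)."""
    print(f"Restoring {service} from {restore_point} and re-creating its environment")

def monitor(services, interval_s=60):
    """Continuously watch each service and recover it without operator involvement."""
    while True:
        for svc in services:
            if not probe(svc):
                restore(svc, LAST_KNOWN_GOOD[svc])
        time.sleep(interval_s)

# monitor(["orders-db"])  # runs indefinitely; a real service would do this in the background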

4) Data management and security


The final pillar of future data protection solutions will leverage the data for:

1. Data management – how we manage repositories of data (files, objects, databases, etc.) using their metadata attributes, without analyzing their content. This level of data management is available in various forms in most storage and data protection solutions.

2. Data Analytics and Content Analysis – this advanced level looks inside data repositories to understand their content and context and decide what action should be taken. It ranges from basic search to more advanced content analysis using Natural Language Understanding to provide insights and optimizations, e.g., by identifying a document’s sensitivity through its content and thus determining the appropriate protection policy (see the sketch after this list).

3. Security – security will become integral to data protection, as their synergies have proven to be effective in responding to, or even preventing, cyber-attacks. Two examples are the way backup services are used for recovery from ransomware attacks, and how isolated air-gapped systems can be used to recover from cyber-attacks.
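As a rough illustration of level 2 above, the sketch below derives a protection policy from a document's content rather than from metadata alone. A real system would use Natural Language Understanding models; this example substitutes a simple keyword heuristic, and the policy values are hypothetical.

# Conceptual sketch: derive a protection policy from document content.
# The keyword heuristic stands in for real NLU models; policy values are hypothetical.

SENSITIVE_TERMS = {"salary", "passport", "diagnosis", "credit card"}

def classify_sensitivity(text):
    """Return 'sensitive' if the content suggests regulated or personal data."""
    lowered = text.lower()
    return "sensitive" if any(term in lowered for term in SENSITIVE_TERMS) else "general"

def protection_policy(doc_text):
    """Map the content classification to backup frequency, retention, and isolation."""
    if classify_sensitivity(doc_text) == "sensitive":
        return {"backup_interval_hours": 1, "retention_days": 2555, "air_gapped_copy": True}
    return {"backup_interval_hours": 24, "retention_days": 90, "air_gapped_copy": False}

print(protection_policy("Employee salary review for Q1 ..."))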

These four pillars will be gradually introduced in the coming years. In the final installment of this blog series, the CTO of Dell Technologies Data Protection Division, Arthur Lent, will discuss how Dell Technologies is innovating for the future to deliver on this vision.

Thursday 23 January 2020

Technical Disruptions Emerging in 2020


This year a broad range of emerging technologies will become a tangible part of the broader IT and business dialogue. Here we’ll take a look at long-term disruptions that will be real enough to matter in thinking through the future but possibly not real enough yet to change the market immediately. What all of these share is the potential to dramatically change IT systems and industry thinking, as well as the world’s technical capability.

◉ The “vacuum tube” era of Quantum computing begins. While we are still many years away from a viable quantum computer, 2020 is the year we will see the first real quantum technology applied to solve small problems in a radically new way. Long-term, we see three big conditions that must be true before we have viable, impactful quantum technology:

1. A viable quantum computing architecture needs to be built. Today we have 25- or 53-qubit systems that are nowhere near the scale needed to run advanced algorithms or solve the theoretical problems quantum may address. We also do not have consensus on what a quantum computer is, as different teams propose different models including trapped ion, trapped photon, etc.

2. Quantum systems must be practical in real-world environments. Today the early systems are exotic in the extreme. Many need supercooled cryogenic systems to exist, and they are incredibly fragile. We need quantum qubit capacity to be delivered much as we deliver compute capacity – via standardized chip-level building blocks.

3. We need a standardized way to interact with quantum via software. Today there isn’t a standard consensus API or even broad agreement on how quantum sits in the rest of the IT stack. There is a growing consensus that it will not replace traditional compute but will instead look much like an accelerator (GPU, FPGA, SmartNIC, etc.); however, the actual way that happens is still not settled.

Look for lots of quantum announcements and “breakthroughs” in these areas in 2020 but we predict we will still be in the vacuum tube era when 2020 ends.

◉ Domain Specific Architectures in Compute will become a reality. We have lived in a world of homogeneous compute for many years. x86 is the compute architecture powering the cloud era and most modern IT. While x86 is still critical to run broad general software, as we enter the AI/Machine Learning era we need far greater compute capacity per watt and, with Moore’s Law winding down, we need alternative models. The one that seems to be the winner is x86 augmented by domain-specific architectures that accelerate specific kinds of software and functions. We already have this for features such as encryption, but in 2020 we will see a massive expansion of the available chipsets that accelerate specific domains. Some examples will be a next wave of SmartNICs that offload and accelerate not just networking but higher-level functions in the communications stream, general-purpose AI/ML chips that are optimized for 4- or 8-bit precision and only accelerate AI/ML tasks, chips that emulate neural networks in silicon, low-power AI inferencing chips for the edge, and many more.

2020 will be the first year that we have a wide range of domain-specific architectures, and that will cause us to change system architecture to accommodate them. We will need dense acceleration servers (like the Dell EMC DSS 8440 or PowerEdge R940xa); we will need an ecosystem approach to these accelerators to pre-integrate them into solutions and make consumption easy; and we will need to virtualize and pool them (VMware Bitfusion as an example) and create APIs to interact with them such as OpenCL and CUDA. By the end of 2020, we predict most enterprises will begin the process of shifting to a heterogeneous compute model built with x86 plus domain-specific architectures.

◉ 5G will change how we think about wireless network capabilities. Early 5G rollouts are happening now, but in 2020 we will start to see the full potential of 5G. It will clearly give us higher bandwidth and lower latency than 4G, and that’s good, but what will change is that we will begin to think about how we use the new capabilities of 5G. What’s new is the ability to program 5G to deliver network slices for specific enterprise applications and users, creating one end-to-end experience via cloud orchestration (we showed this in 2019 at Mobile World Congress).

Additionally, 5G is not one-size-fits-all wireless. Beyond the first use case of enhanced Mobile Broadband (eMBB), 5G will add two entirely different capabilities to wireless systems. First is Ultra-Reliable Low-Latency Communication (URLLC), which will make real-time systems like drones and AR more effective. Second, 5G will add massive Machine-Type Communications (mMTC), which will optimize 5G for the world of billions of low-power, lightly connected sensors. However, the biggest new capability 5G will expose is an edge compute model. Enterprises will begin to look at deploying their real-time and data-intensive applications into the 5G network, close to the users, to get faster response times for tasks like AI-driven control systems in factories or cars; they will also push pre-processing of data to that edge to control the data flow back into data centers and clouds. By the end of 2020, we predict customers will begin to fully understand the significant capability change of 5G and start developing ways to take advantage of it to digitize their businesses.

There are many other emerging technologies that will show up in 2020, but these three (5G, Domain Specific Architectures, and Quantum) represent the ones that are likely to change the trajectory of the industry over the long term. While Quantum will not do that for many years, what’s exciting about 2020 is that all three of them are becoming real enough to broadly enter the technical and business dialogue for the first time.

Dell Technologies is working in all of these areas as we see a data explosion coming that will require orders of magnitude more compute, storage, networking and application capacity to keep up. We are, understandably, excited that 2020 will be a year where we not only continue to move existing technologies forward but also a year where many potential game-changing technologies become real enough to be part of the strategies our customers are developing to win in the digitally transformed world.

Wednesday 22 January 2020

The Time to Modernize Your Edge is Now

If you’ve been following Dell EMC Networking at all over the past year or so, you’ve likely heard a bit about our collaboration with VMware. In 2019, we really took this collaboration to the next level. Or perhaps you could say we took it to the edge, with the Dell EMC SD-WAN Solution powered by VMware.

This solution combines purpose-built networking appliances from Dell EMC with SD-WAN software from VMware, a dedicated Dell EMC orchestrator, and VMware Virtual Cloud Gateways—all factory-integrated for rapid deployment, seamless operations and complete SD-WAN transformation—along with a selection of flexible support options to match your business needs.


The benefits are transformational, enabling you to adapt to change, improve availability and gain better performance for your modern business applications, all while saving time and enjoying up to a 75% reduction in costs over traditional MPLS-backed WAN designs1. I could go on and on about the benefits this solution offers, but we have some great resources available to show you how Dell EMC and VMware are—together—revolutionizing the edge, and why you need SD-WAN.


What I’d like to call attention to is VMware, not that VMware needs my help—in a relatively short period of time, VMware has firmly established itself as a leader in transforming businesses across compute, cloud, networking and security and digital workspace. Everybody knows the strength of VMware’s ability to innovate and disrupt. Most importantly, customers of all sizes know the value VMware adds to their businesses, regardless of size and scale.

But did you know that VMware has been named a Leader by Gartner in its 2019 Magic Quadrant for WAN Edge Infrastructure for the second consecutive year? Among 19 vendors in the space, VMware was positioned highest for “Ability to Execute” and furthest for “Completeness of Vision2.”

VMware SD-WAN™ by VeloCloud® enables bandwidth expansion, provides direct optimal access to cloud-based applications, and enables virtual services integration in cloud and on-premises while dramatically improving operational automation. This is a proven robust platform that employs a horizontally scalable architecture supporting an unlimited number of customers, sites, users, technologies and applications.

VMware’s recognition as a Leader by Gartner makes us at Dell EMC Networking very fortunate to combine this innovative software with our platform expertise and trusted support to help you modernize your edge infrastructure with low risk and high value. And given Gartner’s prediction that “by 2022, as a result of digital business projects, 75% of enterprise-generated data will be created and processed outside the traditional, centralized data center or cloud3,” we think the time to modernize is now.

The Dell EMC SD-WAN Solution powered by VMware is carefully engineered to take full advantage of our combined strengths, backed by Dell EMC’s industry-leading service, support and supply chain, enabling you to modernize your edge infrastructure, and in turn, bring your entire business to the next level. This is the Better Together promise that only Dell Technologies can deliver.

Congratulations to my friends at VMware for being recognized by Gartner. But the big win here is for customers. We’re building a portfolio of solutions in partnership with VMware that accelerate your transformation, and we’re not slowing down in 2020.

Tuesday 21 January 2020

OpenManage Mobile Brings the Awesome Power of Augmented Reality to PowerEdge MX

The IT Manager challenge


You are an IT Manager. Perhaps you manage hundreds of servers and dozens of modular infrastructure systems. You want to deploy infrastructure devices the moment you receive them, keep firmware and drivers updated, monitor them regularly for uptime and efficiency, and retire them securely.


Sounds difficult? Not if you have the right tools!

Dell EMC steps in


Fortunately, the Dell EMC OpenManage portfolio provides you with all the tools you need to do this efficiently and securely. Whether you prefer to manage your infrastructure devices 1:1, through a console, or simply through your current tools (VMware, Microsoft or Ansible consoles), Dell EMC offers you multiple options that make it simple to manage your IT infrastructure through their entire lifecycle.

But how do I manage my IT if I am always on the road?


Managing your IT infrastructure when you are in the data center is all fine and good – but what if you are away from the data center and a device needs your attention? OpenManage Mobile to the rescue – you can monitor and manage your infrastructure devices from a mobile iOS or Android device anytime, anywhere. You can also receive proactive notifications and take appropriate action as needed.

Can you make it even simpler?


When you are in the data center, OpenManage Mobile can communicate directly with a PowerEdge server or an MX chassis with Quick Sync 2, an optional Bluetooth module embedded in the server or MX chassis.

It is simple to use. Run the OpenManage Mobile app on your mobile device and tap the Quick Sync button on the hardware to connect with the app via Bluetooth. You can now access information about your server or MX chassis from the app. You can browse the screens to view inventory and health status, or even perform basic configuration. You want to change the IP address – sure! You want to change the password or key BIOS settings – no problem!

With an MX7000 chassis, OMM utilizes Augmented Reality to make monitoring even simpler. Simply view the chassis through the camera in the OMM app. You will see the chassis image with health overlays on top of individual components. See how the technology works in this video:


In summary, you can follow these three easy steps to monitor your modular infrastructure systems using your mobile device. In the OpenManage Mobile app with Quick Sync activated, focus your camera on an MX chassis.

Step 1: The app identifies the MX chassis.

Step 2: The app places health overlays on top of every component in the MX image.

Step 3: You can tap any health overlay to get more details.

But how does all this work?


OpenManage Mobile uses Augmented Reality (AR) to provide an overview of health updates by looking at the MX chassis through the mobile device camera.

How does OpenManage Mobile use AR Technology?

By calculating the plane of the chassis’ front face and identifying key shapes that make up important components, such as the fan or power button, OpenManage Mobile creates a 3D boundary of where the chassis exists in the real world and overlays a blueprint of all the chassis parts and components on top of the boundary.


How does Quick Sync feed into AR?

In order to draw meaningful data onto the detected chassis, OpenManage Mobile reads details about the chassis health status from the Quick Sync 2 module on the chassis. Quick Sync 2 utilizes Bluetooth Low Energy to wirelessly host its health and component data, allowing OpenManage Mobile to connect and read the chassis. All Quick Sync 2 communications are encrypted with a secure handshake similar to what is used in HTTPS connections. If OpenManage Mobile has connected to the chassis at least once before working with augmented reality, the broadcast address and chassis certificate will quickly and automatically match and validate the Quick Sync 2 connection.

What else can OpenManage Mobile do for you?


A lot.

Get Push Notifications – proactively – from OpenManage Enterprise console

◉ Receive alert notifications from OpenManage Enterprise on your mobile device

◉ Acknowledge, forward and delete alerts from your mobile device

Monitor and Manage PowerEdge servers or MX chassis

◉ Browse server details, health status, firmware inventory, system event logs, and LC logs of individual servers. Share/Forward as needed.

◉ Use your tablet as a crash cart to access the system console

◉ Access and share SupportAssist reports, or crash screens and videos

◉ Access server warranty information

◉ Access system console through VNC to view server OS desktops with iDRAC VNC enabled servers. (This requires 3rd Party VNC client app available for Android and iOS devices.)

◉ Perform server management functions such as Power On, Power cycle, Reboot, or Shutdown

Configure and Provision PowerEdge servers and MX modular infrastructure


◉ Configure one server manually, or multiple servers simultaneously. You can even update the Auto-Update flag on the server from OpenManage Mobile. The same applies to compute sleds in an MX7000 chassis

◉ Provision PowerEdge servers or MX infrastructure: plug in the power cable, connect the mobile device to the server or chassis, assign an IP address, change credentials, and update BIOS attributes

◉ Run RACADM commands and get output directly on the mobile device

Saturday 18 January 2020

Data Protection Evolution in the Coming Decade – Part 2

In the first part of this blog series we reviewed how unrelenting data growth, the increasing value of data and the transformation of application services are introducing increased complexity and risk into the data protection process.

In part two, we will examine how the increased distribution of data along with the maturation of artificial intelligence and machine learning technologies is adding further complexity to how critical data should be protected across Edge, Core and Multi-Cloud environments.

Trend 3: Distributed Data


As the Internet of Things (IoT) drives intelligence deeper into the edge of the network,  growth in data and the IT infrastructure itself will not be confined to just the four walls of the data center and the public Cloud. From autonomous vehicles and smart cities to automation on factory floors, data is being created at every conceivable corner around the globe. This data is stored and analyzed locally without being uploaded to a central data center or to the Cloud.

This distribution of compute, storage and code changes the game not only for the applications but also for data protection systems that can no longer rely on a centralized control server to manage protection for all the entities that make up an application service. If data is captured and analyzed at the edge, its importance is being determined there and therefore the level of protection required for it also needs to be applied there. Moreover, data is becoming ephemeral and predefined data protection policies may not apply anymore.

One example of this would be the video feed from a connected car. Typically, this data is deleted after a short period of time; however, if there is an accident or the car is stolen, the value of the video dramatically increases. In this instance, the video should be immediately protected and replicated to the core data center. A traditional, centralized data protection control plane cannot manage this type of distributed environment, which could potentially comprise many thousands of individual endpoints. Consequently, data protection implementation methods need to change.
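To make the connected-car example concrete, here is a minimal sketch of how an edge agent might escalate protection the moment an event changes the value of local data. The event names, retention values, and replication step are hypothetical placeholders.

# Conceptual sketch of event-driven protection at the edge.
# Event names, retention values, and the replication target are hypothetical.

DEFAULT_POLICY = {"retain_hours": 24, "replicate_to_core": False}
HIGH_VALUE_EVENTS = {"collision_detected", "vehicle_reported_stolen"}

def policy_for_event(event):
    """Escalate retention and replication when an event makes the data valuable."""
    if event in HIGH_VALUE_EVENTS:
        return {"retain_hours": 24 * 365, "replicate_to_core": True}
    return DEFAULT_POLICY

def handle_clip(clip_id, event):
    """Apply the protection policy that matches the latest event for this clip."""
    policy = policy_for_event(event)
    print(f"{clip_id}: retain {policy['retain_hours']}h, replicate={policy['replicate_to_core']}")
    if policy["replicate_to_core"]:
        pass  # a real agent would push the clip to the core data center here

handle_clip("cam-front-000123", "collision_detected")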


Trend 4: Artificial Intelligence and Machine Learning


The fourth trend is the growth in artificial intelligence and machine learning (AI/ML) technologies. For centuries machines worked for humans, but now we have entered an era where intelligent machines work alongside humans.

Humans, in many cases, are guided by machines and interact with them similarly to how they engage with other humans. Digital assistants, navigation systems, auto-pilots and autonomous cars are just a few examples. Such new models of interaction between humans and machines will become the norm not only in our daily life but also in how we operate IT systems. Users who grew up talking to Siri or Alexa will be the next generation of application developers and IT system administrators. The solutions deployed in the Cloud, data centers or Edge will need to evolve to enable user interaction using natural language and to automate most of the mundane work, leaving the users/administrators to perform high-level guidance and exception handling.

Thursday 16 January 2020

What separates AI-Enabled Data Capital Achievers?

I recently joined the Storage Solutions team at Dell Technologies, which is tasked with marketing for High Value Workloads and Vertical Solutions across our storage portfolio. Our team handles everything from SAP, SQL and Oracle to Healthcare, Life Sciences, Media & Entertainment, Video Surveillance and Advanced driver-assistance systems (ADAS) – just to highlight a few. All these established solutions areas have important use cases and workloads that drive significant amounts of storage capacity and revenue.

My area of focus is somewhat different and unique on the team: I cover AI/ML/DL as a discrete solution area but also as a horizontal enabling technology ingredient across all vertical solutions. One of my key responsibilities is to showcase how our storage portfolio enables the design and deployment of optimal IT environments for AI initiatives. This is an emerging area, quite dynamic and diverse, made all the more important as our customers look to gain competitive insights and revenue advantages in this Data Decade.

We’ve all heard that “data is the new oil” and therefore one of the most important assets for organizations around the world. But the advantages and importance of this Data Capital are only realized if value can be extracted successfully. In order to effectively harness the power of data, the very backbone of your IT infrastructure needs to be optimized to handle the work. This can be achieved in many ways; however, one thing is certain and urgent – the data keeps coming and it won’t stop. We’re living in a time of unprecedented growth of unstructured data. Organizations are swamped and they need new ways to address this data deluge.

So how do you effectively and efficiently extract the value from data? What are the key ingredients of a modern IT Infrastructure for AI? What’s holding you back? Both Dell EMC and Intel, as IT industry leaders, are tasked with helping our customers answer these important questions as we collectively embark on our journey toward a digital future. To this end, we recently commissioned Enterprise Strategy Group to survey organizations around the world to determine their state of AI readiness. The results are published in this new Research Insights Paper: How Organizations Unlock Their Data Capital with Artificial Intelligence. The analysts were determined to discover what sets organizations apart with respect to their use of AI technologies and the best IT Infrastructure to power these demanding workflows.

A key tenet of the report is that legacy data analytics techniques simply will not work today and that forward-thinking organizations must implement AI into the mix in order to remain competitive. AI has significant advantages compared to traditional analytics, offering speed, scale, impartiality, precision and uptime. The report goes into detail on these facets, as well as the positive revenue and competitive impacts organizations can expect by introducing AI deployed on a modern IT infrastructure, summarized in the graphic below.


Although the results vary across the study’s respondents, it’s clear that no matter where they are in the evolutionary process of applying AI to their data analytics initiatives, the effort will drive significant amounts of incremental revenue.

How does your organization do the same? This highly informative ESG report highlights four key characteristics of Data Capital achievers:

1. Deployment of high-performance, scalable storage.

2. Use of hardware-accelerated, massively parallel compute for optimal AI model training.

3. Adoption of comprehensive protection for the data pipeline.

4. Enlistment of third parties to augment data science initiatives.

It’s still early days in the use of AI to power data analytics initiatives. Even so, it’s clear from this report that the organizations starting now to build out AI-enabled modern IT infrastructures are the ones that stand to gain the most from their nascent investments.

Are you interested in learning more? In addition to this ESG report, we’ve recently commissioned a couple of other new research papers in this area. Please check out this report from Moor Insights & Strategy as well as this IDC Technology Spotlight.

Tuesday 14 January 2020

What Is Hardware Root of Trust?


As part of the PowerEdge server team, we use the words Root of Trust frequently. It’s such an important concept rooted in the foundational security and protection of each PowerEdge server. And, it is a key component in our Cyber Resilient Architecture. But, do you understand what it means and how it works? I didn’t. So, I sought out experts here at Dell and researched it online. Here’s what I learned and how I would explain it to my friends who aren’t engineers.

What is Root of Trust?


Root of Trust is a concept that starts a chain of trust needed to ensure computers boot with legitimate code. If the first piece of code executed is verified as legitimate, each subsequent piece of code inherits that trust. If you are saying “Huh?” then let me describe the process using a physical-world scenario. Stay with me – it will be much easier to understand in a paragraph or two.

When you travel by plane in the United States, the first layer of security is the TSA checkpoint. Think of this as your Root of Trust. Once you get past TSA, the gate agent just needs your boarding pass because they trust that you have already been checked, scanned, and verified by TSA. And because you got onto the plane, the pilot and the flight attendants trust that the gate agent validated that you are supposed to be on the flight. This eliminates the need for the gate agent, pilots, or anyone else to check you out again. You are trusted because the TSA validated that you are trustworthy. They scanned your belongings to ensure that you aren’t carrying anything harmful. Then, the gate agent validated that you have a ticket. At the airport, there is a physical chain of trust.

An almost identical process happens when a computer boots (or powers up). Before the first piece of code (the BIOS) is run, it is checked by the virtual equivalent of the TSA (a chip) to ensure that it’s legitimate. The check is like the TSA agent examining your passport to confirm that you are who you say you are and that your credentials haven’t been forged or tampered with. Once the BIOS is validated, its code is run. Then, when it’s time for the OS code to run, it trusts the BIOS. Thus, a chain of trust.
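
To make that chain of trust concrete, here is a minimal Python sketch of the idea – my own illustration, not Dell EMC firmware. Each boot stage is checked against a known-good digest before it is allowed to run. A real silicon Root of Trust verifies cryptographic signatures with an immutable key rather than a simple lookup table, but the flow is the same: verify, then hand off, or halt.

import hashlib

# Pretend firmware images for three boot stages (hypothetical contents).
stages = {
    "bios": b"bios code v1.2",
    "bootloader": b"bootloader code v3.4",
    "os_kernel": b"kernel code v5.6",
}

# "Factory provisioning": record the known-good digest of each stage.
trusted_digests = {name: hashlib.sha256(code).hexdigest()
                   for name, code in stages.items()}

def verify(name, code):
    """Return True if this stage matches its known-good digest."""
    return hashlib.sha256(code).hexdigest() == trusted_digests[name]

def boot():
    """Walk the chain: verify each stage before handing control to it."""
    for name in ("bios", "bootloader", "os_kernel"):
        if not verify(name, stages[name]):
            # Unverified code is never run; the boot halts here.
            raise SystemExit(name + " failed verification - boot halted")
        print(name + " verified - handing off")

boot()

# Tamper with any stage and the chain breaks:
stages["bootloader"] = b"malicious bootloader"
# boot()  # would now exit with "bootloader failed verification - boot halted"

If any link in the chain fails verification, nothing downstream of it ever runs – which is exactly the property the airport analogy is meant to convey.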



How we ensure Root of Trust is trustworthy


If an attacker could replace the server’s BIOS with a corrupted version of the BIOS, they would have vast access, control, and visibility into almost everything happening on the server. This scenario would pose a massive threat. This type of compromise would be difficult to detect as the OS would trust that the system checked the BIOS. So, it’s important that the authenticity of the BIOS is fully verified before it is executed. The server has the responsibility to check the credentials of the BIOS to ensure it’s legitimate. How does this happen?

Let’s go back to the airport and continue the analogy. A hijacker may try to impersonate a legitimate person by using their passport. Or, the more sophisticated attackers may try to use a fake passport. The TSA has backend systems in place that help prevent this from happening. Plus, the TSA agents are well-trained and can spot tampering, fakes, and misuse of all types of identification.

On a server, the chip (silicon) validates that the BIOS is legitimate by checking its passport (a cryptographic signature). The key used to verify that signature (a Dell EMC key) is burned into silicon during the manufacturing process and cannot be changed – it’s immutable. This is the only way to make Root of Trust truly immutable – do it in hardware. We burn read-only keys into PowerEdge servers at the factory. These keys cannot be changed or erased. When the server powers on, the hardware chip verifies that the BIOS code is legitimate (from Dell EMC) using the immutable key burned into silicon in the factory.
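
If you want to see what that passport check looks like in software terms, here is a small sketch using Python’s cryptography package. It is purely illustrative – the actual signature scheme, key sizes, and hardware mechanics inside PowerEdge silicon are not specified here, and the RSA-PSS/SHA-256 choice below is just for demonstration – but it shows the core idea: a fixed verification key can confirm that an image was signed by the matching private key and has not been altered.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# "Factory": create a signing key pair. In a real server only the public
# (verification) key is burned into silicon; the private key stays with the
# vendor and is used to sign each BIOS release.
vendor_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
immutable_public_key = vendor_private_key.public_key()  # stands in for the key in silicon

bios_image = b"hypothetical BIOS image bytes"
signature = vendor_private_key.sign(
    bios_image,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# "Power-on": the hardware checks the BIOS signature before running it.
try:
    immutable_public_key.verify(
        signature,
        bios_image,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("BIOS verified - continue boot")
except InvalidSignature:
    print("BIOS verification failed - halt and log")

Change even a single byte of the image after it has been signed and the verify call raises InvalidSignature – the software equivalent of the TSA spotting a forged passport.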

Serious protection that’s built-in, not bolted on


Our servers are designed so that unauthorized BIOS and firmware code is not run. So, if the code is somehow replaced with malware, the server won’t run it. A failure to verify that the BIOS is legitimate results in a shutdown of the server and user notification in the log. The BIOS recovery process can then be initiated by the user. All new PowerEdge servers use an immutable, silicon-based Root of Trust to attest to the integrity of the code running. If the Root of Trust is validated successfully, the rest of the BIOS modules are validated by using a chain of trust procedure until control is handed off to the OS or hypervisor.

The Value of a Secure Server Infrastructure is a research-based paper from IDC that expands on the topic of hardware security. And when you are ready for a more technical explanation of security, this white paper on the Cyber Resilient Security in PowerEdge servers is the perfect reference.

Sunday 12 January 2020

Rack or Tower? How to select your next small business server

What server should you buy for your small business? Find out what two things you must consider when buying a server for businesses with fewer than 100 employees.

Should you buy a rack or tower server? The answer may not be as simple as it used to be. Fifteen years ago, it was a foregone conclusion that small businesses bought towers and large enterprises bought rack servers. That’s how we built them. That’s how you bought them. But it’s not that clear now.

Our latest tower servers can do things only rack servers used to be able to do. And we sell rack servers to small businesses. So, how do you decide? The answer will likely hinge on two factors.

1. Where will you physically put the server?


If the server is going to be installed in a data center, then 98% of you are going to need a rack server. The other 2% will buy a rackable tower server, such as the T640 or T440. But since most small businesses don’t own their own data center, colocation (renting space in someone else’s data center) is the more likely data-center scenario. A rack server is still the answer for colocation.

If you are going to install it under someone’s desk or in the corner of an office next to a plant, then all of you are going to want a tower server. Why? In general, tower servers are quieter than rack servers. Tower servers traditionally have more space for air while rack servers are usually space-constrained. Less air flow usually means more fans running at faster speeds, which translates into more noise.

In some environments, noise can be a huge distraction to people working. Recording studios are measured at about 20 dBA. Quiet offices are at about 35 dBA. Data centers and vacuum cleaners measure in at 75 dBA. In a quiet office, the extra noise from a rack server would likely make it hard for people in that environment to concentrate or talk with co-workers and employees. For instance, the T340 is a 1-socket tower server that would likely put out 23 dBA while idle (running the OS only) and up to 30 dBA while operating at peak. A similar rack server, the R340, would likely put out 38 dBA all the time.


Fig 1. Acoustical reference points and output comparisons

If the server will not live in a data center or a place where people are, the decision becomes harder. Locations like a server room, wiring closet, or a coat closet are all places where servers could live. If this is where yours will operate, then you must evaluate consideration number two.

2. What is the temperature where you want to physically put the server?


Servers run optimally when the temperature of the surrounding air is within their normal operating range. Rack servers require physical racks, known as cabinets or server racks, that allow you to mount servers. And when you mount a bunch of rack-installed devices like servers, storage arrays, networking switches, and uninterruptible power supplies (UPS), the temperature of the air around them rises. This makes temperature control a requirement for continuous operation. In data centers, conditioned air flows up from a raised floor to keep the temperature at optimal levels.

Small businesses generally don’t have access to raised floors and chillers. However, some server rooms may be air conditioned. If the server is going to be installed in a temperature-controlled environment, a rack server will likely be the best option.

If the server will be installed in a former coat closet or other tight space, it’s likely that the temperature of the air will increase (sometimes substantially). If the internal components of a server get too hot, the risk of failure goes up. The same thing happens when it gets too cold. That’s why servers are designed to shut down when the temperature exceeds their standard operating range. It’s a protection mechanism. In general, today’s servers have a range of 50°F – 95°F (10°C – 35°C) with no direct sunlight on the equipment.

Did you know? Although power and cooling technology has changed a lot in 15 years, racks and towers operate in the same temperature ranges.

Many of the modern servers available today also have an extended operating temperature range. Dell EMC PowerEdge servers can continuously operate even if temperatures get as cold as 41°F (5°C) or as hot as 104°F (40°C). And if there is a temperature spike to 113°F (45°C) for a couple of hours a year, the server can handle it.

Fans and heat sinks help to move hot air away from these components and out the back of the server. But if the server is in a coat closet without a vent, the closet will get warm and the air surrounding the components will be warmer. And if the server components get warm, the server fans will speed up and make noise. I’ve personally cracked the door on a converted closet. I’ve also heard of people installing a vent or replacing the door with a felt screen. In any of these cases, a tower server is the better option. Why? When you open the closet door, the noise is no longer confined. Listening to the sound of a rack server’s fans can be distracting, if not annoying, for office workers and especially customers.

Here’s a simple decision matrix a small business can use to help decide between a rack and a tower server. Most decisions will come down to noise first and temperature second.

Where will the server be placed?      Best server form factor
Data center or colocated space        Rack
Server room                           Likely a rack
Wiring closet                         Rack or tower
Coat closet                           Likely a tower
On, under, or near a desk             Tower
In the corner of the office           Tower
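
If you prefer code to tables, the same guidance can be written as a tiny lookup. This is just a sketch of the matrix above – a hypothetical helper of my own, not a Dell EMC tool – with the tie-breaking rule (noise first, temperature second) as the fallback:

# Hypothetical helper; location names and wording are illustrative only.
FORM_FACTOR_BY_LOCATION = {
    "data center or colocated space": "Rack",
    "server room": "Likely a rack",
    "wiring closet": "Rack or tower",
    "coat closet": "Likely a tower",
    "on, under, or near a desk": "Tower",
    "in the corner of the office": "Tower",
}

def recommend_form_factor(location):
    """Suggest a form factor based on where the server will live."""
    return FORM_FACTOR_BY_LOCATION.get(
        location.lower().strip(),
        "No direct match - weigh noise first, then temperature",
    )

print(recommend_form_factor("Coat closet"))  # Likely a tower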

Want more? Check out the 5 things to consider when buying your first server. These are great tips to help get you started on your server selection if you are unsure where to begin.