The Future Starts Here

Data is growing at an exponential rate. The traditional siloed methods of managing and delivering storage cannot sustain this explosive data growth, as the associated operational costs will continue to rise at prohibitive rates. At the same time, public clouds have shown that by standardizing and virtualizing their infrastructure, it is possible to simplify, automate, and achieve extreme operational economics. While public clouds do solve some customer pain points, they introduce new constraints, such as requiring customers to move their data off-premises and forcing them to use a “one-size-fits-all” approach for all their data storage needs.

Given the transformations occurring in our industry, we asked enterprise customers what they wanted from their storage solutions, and a common set of concerns quickly came to the forefront. Customers want to lower their operational costs — especially in the presence of massive data growth — and they want to run their complex environments efficiently, without giving up their choice of IT infrastructure, the one that they believe best meets their business needs. Enterprise customers are also clear that they want to attain the economics of public cloud infrastructure without public cloud constraints. As a result of these customer conversations, the need for EMC ViPR became increasingly obvious.

As the leader in the storage industry, we realized that we had to reexamine the way we manage and deliver storage in multi-vendor environments that consist of commodity and specialized storage arrays with both traditional and new interfaces. There are important lessons to be learned from the public cloud. I experienced them first-hand while building and running Microsoft Azure. To solve the hard storage problems and to remove the imposed public cloud constraints, we need to push the boundaries of innovation.
Only then can we address the immediate problems that our customers are facing today, while simultaneously laying a foundation that will provide customers a path to a future with no compromises.

ViPR’s design brings together what we heard from our customers and the lessons learned from the public cloud. EMC ViPR is our solution to what customers requested and — even more importantly — it lays the groundwork for the future. Built as an open platform, ViPR allows others to innovate and enables us to work together as a community to fundamentally redefine how storage should be managed and delivered. As part of this community, we invite you to begin your software-defined storage journey by creating your own personal experience at ViPR Central.

Prescribing Cloud

Health IT executives attending HIMSS15 are working on the frontlines to realize the promise of accountable care. We’re excited for the opportunity to come together to share new ideas and lessons learned.

In the end, the ultimate driver in healthcare is outcomes. Hybrid cloud improves IT outcomes by driving down costs. Once more cost is taken out of infrastructure, it can be reinvested in innovation. That, in turn, improves patient care outcomes. And that sounds like a good plan. It’s good to have a plan.

Healthcare data is growing faster than ever before. At 48 percent each year, it’s one of the fastest growing segments in the Digital Universe. This data is coming from many sources – clinical applications, compliance requirements, population health, and FutureCare-enabling technologies for cloud, Big Data, mobile, and social – just to name a few. Health IT needs a plan to manage and take advantage of all this information. More than ever before, a hybrid cloud model needs to be part of that plan.

On the Road to Cloud

According to a recent MeriTalk and EMC report, in 2015, 62 percent of health IT leaders are increasing cloud budgets to provide more coordinated, cost-effective care. Where should they focus their IT budget? A hybrid cloud lets you place the right applications in the right cloud with the right cost and performance. And, it lets you protect and secure protected health information (PHI). The goal: eliminate data silos to gain a 360-degree view of the patient, in real time, at the point of care.

A hybrid cloud consolidates and eliminates inefficient silos. Healthcare providers can balance clinical and business workloads in an enterprise hybrid cloud which incorporates private, managed private, and trusted public clouds.

As Health IT starts this journey, other organizational objectives can get jump-started.
For example, Health IT is then better equipped to deploy a data lake for clinical collaboration, and an agile data and analytics platform for storage, management, and analysis, by bringing together data from different sources across multiple protocols. As a result, you have the opportunity to deploy clinical predictive analytics for managing population health, reducing readmissions, and optimizing patient treatment plans.

And, with just 18 percent of providers running EMR applications partially or fully in a hybrid cloud today, opportunity lies ahead. To take advantage, Health IT organizations can begin with a converged infrastructure, which provides a fast track to enterprise hybrid cloud computing by combining compute, network, and storage resources.

Artist Takes Wallpaper from Computer to Your Wall

I obviously spend a lot of time in the world of technology. So when I hear the word wallpaper, my first thought is the background image on my laptop or phone. But the more tactile form of wallpaper – the kind that goes on an actual wall – is making “a fashionable comeback.”

And one of Dell’s entrepreneur customers, the interdisciplinary artist Kathryn Zaremba, is poised to take advantage of this trend and, in fact, may be one of the catalysts for the renewal in wallpaper popularity. Hers is not the mirrored and flocked wallpaper of my 1970s childhood, however. Zaremba’s designs include everything from swans and avocados to Matisse-inspired abstracts.

“Wallpaper seemed like a melding of my life experiences. I want to make things that enliven a space, that provide creative energy and inspiration for its inhabitants,” Zaremba told Urban Outfitters. “An idea for a pattern can really pop up out of nowhere or sometimes it’s in a museum looking at works of art or it might come out of playful experimentation in my studio.”

She spent time on theater sets and sound stages during her first career (yes, while many her age are still figuring out their first, Zaremba is onto her second career). Zaremba experienced coast-to-coast art galleries, stage sets and science museums, which became learning labs for her future illustrations and designs. She stepped away from stage and screen to attend the Kansas City Art Institute in Missouri and went on to graduate school at The Corcoran College of Art + Design in Washington, D.C. It’s also where she founded her wallpaper business and co-founded The Lemon Collective, a workshop space in Washington, D.C. that is focused “mostly on making and makers,” The Washington Post reports.

Starting these endeavors after college, however, she did not have access to the same high-tech equipment she did while in school.
She connected with a Dell Small Business Technology Advisor who had tips on what tech would be good for her business – including a Dell Precision All-in-One.

“.. I’m pretty obsessed with it,” Zaremba says of her Dell Precision All-in-One. In the video below, she explains how she draws and cuts out shapes, then digitizes them for her wallpaper. “I manipulate the shapes in Adobe Photoshop and Adobe Illustrator to make wallpaper that is uniquely me for clients.”

She also needs to meet with those clients in their own space, so her Dell XPS 13 2-in-1 (below) allows her to take a portable studio with her. She notes that it is strong enough to power the creation of high-resolution illustrations, and “makes it easy to show [clients] vibrant details, samples and renders of how the wallpaper will come to life in their space.”

She might need to add some video conferencing technology to her setup next, because people around the world are taking notice of Zaremba’s wallpaper. Her Swansy Noir and Muse Variations designs were recently featured in Refinery29 UK’s list of “Best Removable Wallpaper for Your Rented Flat.”

Our Small Business Technology Advisors are available and ready to help with your tech questions so you can focus on running your business. From selecting the right systems to incorporating servers or creating networks for employees and clients near and far, they can make managing your technology easy.

Accelerating AI and Deep Learning with Dell EMC Isilon and NVIDIA GPUs

This blog was co-authored by Jacci Cenci, Sr. Technical Marketing Engineer, NVIDIA

Over the last few years, Dell EMC and NVIDIA have established a strong partnership to help organizations accelerate their AI initiatives. For organizations that prefer to build their own solution, we offer Dell EMC’s ultra-dense PowerEdge C-series, with NVIDIA’s Tesla V100 Tensor Core GPUs, which allows scale-out AI solutions from four up to hundreds of GPUs per cluster. For customers looking to leverage a pre-validated hardware and software stack for their deep learning initiatives, we offer Dell EMC Ready Solutions for AI: Deep Learning with NVIDIA, which also features Dell EMC Isilon All-Flash storage. Our partnership is built on the philosophy of offering flexibility and informed choice across a broad portfolio.

To give organizations even more flexibility in how they deploy AI with breakthrough performance for large-scale deep learning, Dell EMC and NVIDIA have recently collaborated on a new reference architecture that combines Dell EMC Isilon All-Flash scale-out NAS storage with NVIDIA DGX-1 servers for AI and deep learning (DL) workloads.

To validate the new reference architecture, we ran multiple industry-standard image classification benchmarks using 22 TB datasets to simulate real-world training and inference workloads. This testing was done on systems ranging from one DGX-1 server all the way to nine DGX-1 servers (72 Tesla V100 GPUs) connected to eight Isilon F800 nodes. This blog post summarizes the DL workflow, the training pipeline, the benchmark methodology, and finally the results of the benchmarks.

Key components of the reference architecture shown in Figure 1 include:

Dell EMC Isilon All-Flash scale-out NAS storage, which delivers the scale (up to 33 PB), performance (up to 540 GB/s), and concurrency (up to millions of connections) to eliminate the storage I/O bottleneck, keeping the most data-hungry compute layers fed to accelerate AI workloads at scale.

NVIDIA DGX-1 servers, which integrate up to eight NVIDIA Tesla V100 Tensor Core GPUs fully interconnected in a hybrid cube-mesh topology. Each DGX-1 server can deliver 1 petaFLOPS of AI performance, and is powered by the DGX software stack, which includes NVIDIA-optimized versions of the most popular deep learning frameworks for maximized training performance.

Figure 1: Reference Architecture

Deep Learning Workflow

As visualized in Figure 2, DL usually consists of two distinct workflows: model development and inference.

Figure 2: Common DL Workflows: Model development and inference

The workflow steps are defined and detailed below.

Ingest Labeled Data – In this step, the labeled data (e.g. images and their labels, which indicate whether the image contains a dog, cat, or horse) are ingested into the Isilon storage system. Data can be ingested via NFS, SMB, and HDFS protocols.

Transform – Transformation includes all operations that are applied to the labeled data before they are passed to the DL algorithm. It is sometimes referred to as preprocessing. For images, this often includes file parsing, JPEG decoding, cropping, resizing, rotation, and color adjustments. Transformations can be performed on the entire dataset ahead of time, storing the transformed data on Isilon storage. Many transformations can also be applied in a training pipeline, avoiding the need to store the intermediate data.

Train Model – In this phase, the model parameters are learned from the labeled data stored on Isilon. This is done through the training pipeline shown in Figure 3 and detailed later in this post.

Validate Model – Once the model training phase completes with a satisfactory accuracy, you’ll want to measure the accuracy of the model on validation data stored on Isilon – data that the model training process has not seen. This is done by using the trained model to make inferences from the validation data and comparing the result with the label.
This is often referred to as inference, but keep in mind that this is a distinct step from production inference.

Production Inference – The trained and validated model is then often deployed to a system that can perform real-time inference. It will accept as input a single image and output the predicted class (dog, cat, horse). Note that the Isilon storage and DGX-1 server architecture is not intended for, nor was it benchmarked for, production inference.

Benchmark Methodology Summary

To measure the performance of the solution, various benchmarks from the TensorFlow Benchmarks repository were carefully executed. This suite of benchmarks performs training of an image classification convolutional neural network (CNN) on labeled images. Essentially, the system learns whether an image contains a cat, dog, car, train, etc.

The well-known ILSVRC2012 image dataset (often referred to as ImageNet) was used. This dataset contains 1,281,167 training images in 144.8 GB[1]. All images are grouped into 1000 categories or classes. This dataset is commonly used by deep learning researchers for benchmarking and comparison studies.

When running the benchmarks on the 148 GB dataset, it was found that the storage I/O throughput gradually decreased and became virtually zero after a few minutes. This indicated that the entire dataset was cached in the Linux buffer cache on each DGX-1 server. Of course, this is not surprising, since each DGX-1 server has 512 GB of RAM and this workload did not significantly use RAM for other purposes. As real datasets are often significantly larger than this, we wanted to determine the performance with datasets that are not only larger than the DGX-1 server RAM, but larger than the 2 TB of coherent shared cache available across the 8-node Isilon cluster.
To accomplish this, we simply made 150 exact copies of each image archive file, creating a 22.2 TB dataset.

Benchmark Results

Figure 5: Image classification training with original 113 KB images

There are a few conclusions that we can make from the benchmarks represented above.

Image throughput, and therefore storage throughput, scale linearly from 8 to 72 GPUs.

The maximum throughput that was pulled from Isilon occurred with ResNet50 and 72 GPUs. The total storage throughput was 5907 MB/sec.

For all tests shown above, each GPU had 97% utilization or higher. This indicates that the GPU was the bottleneck.

The maximum CPU utilization on the DGX-1 server was 46%. This occurred with ResNet50.

Large Image Training

The benchmarks in the previous section used the original JPEG images from the ImageNet dataset, with an average size of 115 KB. Today it is common to perform DL on larger images. For this section, a new set of image archive files was generated by resizing all images to three times their original height and width. Each image is encoded as a JPEG with a quality of 100 to further increase the number of bytes. Finally, we make 13 copies of each image archive file. This results in a new dataset that is 22.5 TB and has an average image size of 1.3 MB.

Because we are using larger images with the best JPEG quality, we want to match them with the most sophisticated model in the TensorFlow Benchmark suite, which is Inception-v4. Note that regardless of the image height and width, all images must be cropped and/or scaled to be exactly 299 by 299 pixels to be used by Inception-v4. Thus, larger images place a larger load on the preprocessing pipeline (storage, network, CPU) but not on the GPU.

The benchmark results in Figure 6 were obtained with eight Isilon F800 nodes in the cluster.

Figure 6: Image classification training with large 1.3 MB images

As before, we have linear scaling from 8 to 72 GPUs. The storage throughput with 72 GPUs was 19,895 MB/sec.
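The dataset sizing and aggregate throughput figures above can be sanity-checked with a few lines of arithmetic. This is only a sketch: the 148 GB base size and the 150-copy multiplier are taken from the text, and the even per-GPU split of storage throughput is an illustrative assumption, not a benchmarked figure.

```python
# Sanity-check the dataset sizing and throughput numbers quoted above.
# Sizes use SI units (1 TB = 1000 GB), matching the post's footnote.

def scaled_dataset_tb(base_gb: float, copies: int) -> float:
    """Size in TB after replicating the image archives `copies` times."""
    return base_gb * copies / 1000.0

# 150 copies of the ~148 GB ImageNet archives yield the 22.2 TB dataset.
small_images_tb = scaled_dataset_tb(148, 150)

# Aggregate Isilon read throughput divided evenly across 72 GPUs
# (the even split is an assumption for illustration only).
per_gpu_mb_s = 19895 / 72

print(f"{small_images_tb:.1f} TB, {per_gpu_mb_s:.0f} MB/s per GPU")
```

At roughly 276 MB/s of input data per GPU, the arithmetic is consistent with the post's observation that the GPUs, not storage, were the bottleneck.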
GPU utilization was at 98% and CPU utilization was at 84%.

Conclusion

Here are some of the key findings from our testing of the Isilon and NVIDIA DGX-1 server reference architecture:

Achieved compelling performance results across industry-standard AI benchmarks from eight through 72 GPUs without degradation to throughput or performance.

Linear scalability from 8 to 72 GPUs, delivering up to 19.9 GB/s while keeping the GPUs pegged at >97% utilization.

The Isilon F800 system can deliver up to 96% of the throughput of local memory, bringing it extremely close to the maximum theoretical performance limit an NVIDIA DGX-1 system can achieve.

Isilon-based DL solutions deliver the capacity, performance, and high concurrency to eliminate the storage I/O bottlenecks for AI. This provides a rock-solid foundation for large-scale, enterprise-grade DL solutions with a future-proof scale-out architecture that meets your AI needs of today and scales for the future.

If you are interested in learning more about this, be sure to see the Dell EMC Isilon and NVIDIA DGX-1 servers for deep learning whitepaper. You’ll find the complete benchmark methodology as well as results for batch inference for model validation. It also contains the complete reference architecture, including hardware and software configuration, networking, sizing guidance, performance measurement tools, and some useful scripts.

[1] All unit prefixes use the SI standard (base 10) where 1 GB is 1 billion bytes.

About the co-author

Jacci Cenci, Technical Marketing Engineer, NVIDIA

Jacci has worked for the past two years at NVIDIA with partners and customers to support accelerated computing and deep learning requirements. Prior to NVIDIA, Jacci spent four years as a data center consultant focused on machine learning, big data analytics, technical computing, and enterprise solutions at Dell EMC.
Figure 3: Training pipeline

Preprocessing – The preprocessing pipeline uses the DGX-1 server CPUs to read each image from Isilon storage, decode the JPEG, crop and scale the image, and finally transfer the image to the GPU. Multiple steps on multiple images are executed concurrently. JPEG decoding is generally the most CPU-intensive step and can become a bottleneck in certain cases.

Forward and Backward Pass – Each image is sent through the model. In the case of image classification, there are several prebuilt structures of neural networks that have been proven to work well. To provide an example, Figure 4 below shows the high-level architecture of the Inception-v3 model, which contains nearly 25 million parameters that must be learned. In this diagram, images enter from the left and the probability of each class comes out on the right. The forward pass evaluates the loss function (left to right) and the backward pass calculates the gradient (right to left). Each image contains 150,528 values (224*224*3) and the model performs hundreds of matrix calculations on millions of values. The NVIDIA Tesla GPU performs these matrix calculations quickly and efficiently.

Figure 4: Inception-v3 model architecture

Optimization – All GPUs across all nodes exchange and combine their gradients through the network using the All Reduce algorithm. The communication is accelerated using NCCL and NVLink, allowing the GPUs to communicate through the Ethernet network, bypassing the CPU and PCIe buses. Finally, the model parameters are updated using the gradient descent optimization algorithm.

Repeat until the desired accuracy (or another metric) is achieved. This may take hours, days, or even weeks. If the dataset is too large to cache, it will generate a sustained storage load for this duration.
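The preprocessing stage described above (read, decode, crop and scale, then hand off to the GPU, with many images in flight at once) can be sketched with Python's standard library. This illustrates only the concurrency pattern; the reference architecture uses TensorFlow's input pipeline, and the `load_image` and `preprocess` functions here are stand-ins, not real JPEG operations.

```python
# Illustrative sketch of a concurrent preprocessing pipeline: worker
# threads overlap storage reads while the consumer transforms each
# image, mirroring the read -> decode -> crop/scale -> GPU handoff flow.
from concurrent.futures import ThreadPoolExecutor

def load_image(path: str) -> bytes:
    # Stand-in for reading a JPEG from Isilon over NFS.
    return f"raw:{path}".encode()

def preprocess(raw: bytes) -> bytes:
    # Stand-in for CPU-side JPEG decode, crop, and resize.
    return raw.replace(b"raw", b"tensor")

def pipeline(paths, workers=8):
    # Overlap I/O across `workers` threads; results are yielded in
    # order, ready to be copied to the GPU by the training loop.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        raw_images = pool.map(load_image, paths)
        yield from map(preprocess, raw_images)

batch = list(pipeline([f"img_{i}.jpg" for i in range(4)]))
print(batch[0])  # b'tensor:img_0.jpg'
```

Because the reads and the CPU transforms overlap, a pipeline like this keeps the GPUs fed; as the post notes, JPEG decoding is the step most likely to become the CPU bottleneck.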

Bus in deadly Grand Canyon crash offers tours by comedians

Authorities say a tour bus that rolled over and killed one person at the Grand Canyon was operated by a Las Vegas company that offers tours guided by comedians. A sheriff’s office on Monday identified Shelley Ann Voges of Boonville, Indiana, as the person who died in the crash. More than 40 people were on the bus operated by Comedy On Deck. Three patients who suffered critical injuries are now listed in stable condition. Forty others were released from an Arizona hospital. No one picked up the phone when The Associated Press called the tour company Monday.

CDC requires face masks on airlines, public transportation

ATLANTA (AP) — A new federal requirement for wearing face masks on airline flights and public transportation takes effect on Monday. The Centers for Disease Control and Prevention issued an order that backs up one announced by President Joe Biden shortly after he took office. The CDC order says passengers on planes, trains, subways, buses, taxis and ride-shares must wear a mask over their nose and mouth. The order extends to waiting areas like airports and subway stations. The CDC is telling airline and transit crews to deny transportation to people who won’t mask up.