Open Cloud

Traditionally, Sherlock has focused on compliance and data security. While these will always be at the forefront of Sherlock's initiatives, we recognize that our customers face additional challenges when adopting Cloud platforms, including the learning curve, orchestrating the vast number of related resources and tools, and meeting cost parameters. Sherlock minimizes these challenges by providing value-added capabilities that automate the deployment, integration, and operation of complex applications, while simultaneously optimizing the cost of Cloud resources consumed.

Vyloc Cloud, Sherlock's latest managed cloud offering, provides turnkey solutions that span the entire data lifecycle pipeline, including ingestion & integration, storage, compute, and apps. Built on top of public cloud platforms, including AWS and Azure, it orchestrates the necessary microservices to provide fit-for-purpose solutions that support big data, analytics, and data science use cases. These services are designed for customers who want to leverage turnkey technology solutions and prefer a more hands-off approach to the intricacies of the capabilities offered by public cloud platforms.

Sherlock offers pre-built templates that deploy solutions in support of various use cases. Currently, the following tools are available through the Innovation Accelerator Platform series. Sherlock plans to release the Innovation Accelerator Platforms in phases to continuously align with, and cater to, the evolving needs of the research community.
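
As a rough illustration of what consuming one of these pre-built templates could look like, the sketch below launches a hypothetical CloudFormation template with boto3. The stack name, template URL, and parameters are placeholder assumptions and do not describe Sherlock's actual template catalog or deployment mechanism.

```python
# Hypothetical example: launching a pre-built deployment template via AWS CloudFormation.
# The stack name, template URL, and parameter names are placeholders only.
import boto3

cloudformation = boto3.client("cloudformation", region_name="us-west-2")

response = cloudformation.create_stack(
    StackName="innovation-accelerator-emr-spark",  # placeholder stack name
    TemplateURL="https://example-bucket.s3.amazonaws.com/templates/emr-spark.yaml",  # placeholder
    Parameters=[
        {"ParameterKey": "Environment", "ParameterValue": "research"},
    ],
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# The returned stack ID can be used to track the deployment.
print(response["StackId"])
```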

1. Amazon Elastic MapReduce (EMR) with Spark

  • Provides a managed Hadoop framework that makes it easy, fast, and cost-effective to process vast amounts of data across dynamically scalable EC2 instances.

  • Leverages multiple data stores, including Amazon S3 and the Hadoop Distributed File System (HDFS). Additionally, with the EMR File System (EMRFS), EMR can efficiently and securely use Amazon S3 as an object store for Hadoop.

  • Includes Apache Spark, a unified analytics engine for big data processing, with built-in modules for streaming, SQL, machine learning and graph processing.

  • Securely and reliably handles a broad set of big data use cases, including log analysis, web indexing, data transformations (ETL), machine learning, financial analysis, scientific simulation, and bioinformatics.

  • Deployed in minutes with automation that handles infrastructure provisioning, security group setup, encryption, cluster setup, Hadoop configuration, and cluster tuning (see the provisioning sketch after this list).
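
As a minimal sketch of the kind of provisioning this automation performs, the example below requests a small Spark-enabled EMR cluster through boto3. The cluster name, instance types, IAM roles, and S3 log location are illustrative assumptions rather than Sherlock's actual configuration.

```python
# Minimal sketch: request a small EMR cluster with Spark via boto3.
# Cluster name, instance types, roles, and the S3 log URI are placeholders.
import boto3

emr = boto3.client("emr", region_name="us-west-2")

response = emr.run_job_flow(
    Name="sherlock-emr-spark-example",            # placeholder cluster name
    ReleaseLabel="emr-6.10.0",                    # example EMR release
    Applications=[{"Name": "Spark"}, {"Name": "Hadoop"}],
    LogUri="s3://example-bucket/emr-logs/",       # placeholder log location
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": True,
        "TerminationProtected": False,
    },
    JobFlowRole="EMR_EC2_DefaultRole",            # default EMR instance profile
    ServiceRole="EMR_DefaultRole",                # default EMR service role
    VisibleToAllUsers=True,
)

# The cluster ID can be used to monitor status or submit Spark steps later.
print(response["JobFlowId"])
```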


2. Jupyter Notebook with IPython/StarCluster/PySpark/AWS-Athena

  • Jupyter Notebook configured with a variety of compute and storage options.

  • Leverages multiple data stores, including Amazon S3.

  • Offers high-end compute, including GPU-based compute capability.

  • Single-node or cluster-based compute (a notebook usage sketch follows this list).

  • Deployed in minutes with automation that handles infrastructure provisioning, security group setup, encryption, cluster setup, Hadoop configuration, and cluster tuning.
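
As a minimal sketch of notebook usage, assuming PySpark is available in the deployed environment, the cell below reads a CSV file from S3 and runs a quick aggregation; the bucket, file, and column names are placeholders.

```python
# Hypothetical Jupyter cell: load a CSV from S3 with PySpark and run a quick aggregation.
# The bucket, key, and column name are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("sherlock-notebook-example")
    .getOrCreate()
)

df = spark.read.csv(
    "s3a://example-bucket/datasets/measurements.csv",  # placeholder S3 location
    header=True,
    inferSchema=True,
)

# Quick sanity check that the data loaded and the cluster responds.
df.groupBy("site").count().show()  # "site" is an assumed column name
```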