HubDiscovery

HubDiscovery | HubBucket Discovery

Website Under Construction

HubDiscovery is a Machine Learning - ML and Deep Learning - DL, Continuous Integration and Continuous Delivery - CI/CD, Site Reliability Engineering - SRE, and Development and Operations - DevOps platform for Health Information Technology - HealthIT.


HubDiscovery is an intelligent Health Information Technology - HealthIT platform that integrates the following technologies and methodologies:

  • Machine Learning - ML models and algorithms
  • Deep Learning - DL models and algorithms
  • Continuous Integration and Continuous Delivery - CI/CD
  • Site Reliability Engineering - SRE
  • Development and Operations - DevOps
  • Data Mining and Data Cleaning

HubDiscovery improves Information Technology - IT operations and Software Design and Development interoperability for Healthcare Providers / Healthcare Organizations across the following domains:

  • Health Information Technology - HealthIT
  • Health Technology - HealthTech
  • Medical Technology - MedTech
  • Mobile Health Technology - mHealth
  • Telemedicine / Telehealth

Furthermore, ...

Site Reliability Engineering - SRE

Site Reliability Engineering - SRE / Service Reliability Engineering - SRE

Site Reliability Engineering - SRE is a discipline that incorporates aspects of Software Engineering and applies them to Information Technology - IT infrastructure and operations problems. The main goals are to create scalable and highly reliable software systems.

A Site Reliability Engineer - SRE will spend up to 50% of their time doing Information Technology - IT operations related work such as issues, on-call, and manual intervention. Since the software system that an SRE oversees is expected to be highly automated and self-healing, the SRE should spend the other 50% of their time on development tasks such as new features, scaling, or automation. The Site Reliability Engineer - SRE is either a Software Engineer with a good IT administration background or a highly skilled System Administrator with knowledge of coding and automation.


Site Reliability Engineering - SRE / Service Reliability Engineering - SRE

SRE removes the conjecture and debate over what can be launched and when. It introduces a mathematical formula for green- or red-lighting launches and dedicates a team of people with Information Technology - IT operations (Ops) skills (appropriately called Service Reliability Engineers, or SREs) to continuously oversee the reliability of the product.


How Site Reliability Engineering - SRE works

Firstly, new launches are green-lighted based on current product performance.

Most applications don’t achieve 100% uptime, so for each service the SRE team sets a Service Level Agreement - SLA that defines how reliable the system needs to be to end-users. If the team agrees on a 99.9% SLA, that gives them an error budget of 0.1%. An error budget is exactly what its name suggests: the maximum allowable threshold for errors and outages.

The development team can “spend” this error budget in any way they like. If the product is currently running flawlessly, with few or no errors, they can launch whatever they want, whenever they want. Conversely, if they have met or exceeded the error budget and are operating at or below the defined SLA, all launches are frozen until they reduce the number of errors to a level that allows the launch to proceed. Both the SREs and developers have a strong incentive to work together to minimize the number of errors.
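
As a rough sketch of that arithmetic (illustrative only, not a HubDiscovery feature), the snippet below converts an SLA target into an error budget and the downtime it allows over a 30-day window:

```python
# Illustrative sketch: converting an SLA target into an error budget.
# The 99.9% figure and the 30-day window are example values, not HubDiscovery settings.

def error_budget(sla_target: float, window_minutes: float) -> dict:
    """Return the error budget implied by an SLA target over a time window."""
    budget_fraction = 1.0 - sla_target          # e.g. 1 - 0.999 = 0.001 (0.1%)
    allowed_downtime = budget_fraction * window_minutes
    return {
        "budget_fraction": budget_fraction,
        "allowed_downtime_minutes": allowed_downtime,
    }

if __name__ == "__main__":
    # A 99.9% SLA over a 30-day month leaves roughly 43 minutes of downtime.
    thirty_days_in_minutes = 30 * 24 * 60
    print(error_budget(0.999, thirty_days_in_minutes))
```

With a 99.9% SLA, the budget works out to roughly 43 minutes of downtime per 30-day month; that is the amount the development team can “spend” on risky launches before launches are frozen.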

Secondly, Site Reliability Engineers - SREs can code as well

Both the development and SRE teams share a single staffing pool, so for every SRE that is hired, one less developer headcount is available (and vice versa). This ends the never-ending headcount battle between Dev and Ops and creates a self-policing system where developers get rewarded with more teammates for writing better-performing code (i.e., code that needs less support from fewer SREs).

One of the core principles mandates that SREs can only spend 50% of their time on Information Technology - IT operations work. As much of their time as possible should be spent writing code and building systems to improve performance and operational efficiency.


Software Developers' role in Site Reliability Engineering

The Software Development team handles 5% of all Information Technology - IT operations workload (handling tickets, providing on-call support, etc.). This allows them to stay closely connected to their product, see how it is performing, and make better coding and release decisions. In addition, any time the operations load exceeds the capacity of the SRE team, the overflow always gets assigned to the developers. When the system is working well, the developers begin to self-regulate here as well, writing strong code and launching carefully to prevent future issues.

Development and Operations - DevOps

Development and Operations - DevOps is an Enterprise Software Development term for an agile relationship between Software Development and Information Technology - IT operations. The goal of DevOps is to change and improve that relationship by advocating better communication and collaboration between these two business units.

How Development and Operations - DevOps Works

Under a DevOps model, Software Development and Information Technology - IT Operations teams are no longer “siloed.” Sometimes, these two teams are merged into a single team where the engineers work across the entire application lifecycle, from development and test to deployment to operations, and develop a range of skills not limited to a single function.

In some Development and Operations - DevOps models, quality assurance and security teams may also become more tightly integrated with development and operations and throughout the Application Lifecycle. When Cyber Security is the focus of everyone on a Development and Operations - DevOps team, this is sometimes referred to as Development, Security and Operations - DevSecOps.

These teams use practices to automate processes that historically have been manual and slow. They use a technology stack and tooling which help them operate and evolve applications quickly and reliably. These tools also help engineers independently accomplish tasks (for example, deploying code or provisioning infrastructure) that normally would have required help from other teams, and this further increases a team’s velocity.


Development and Operations - DevOps includes

Continuous Integration

Continuous integration is a DevOps software development practice where developers regularly merge their code changes into a central repository, after which automated builds and tests are run. Continuous integration most often refers to the build or integration stage of the software release process and entails both an automation component (e.g. a CI or build service) and a cultural component (e.g. learning to integrate frequently). The key goals of continuous integration are to find and address bugs more quickly, improve software quality, and reduce the time it takes to validate and release new software updates.

Why is Continuous Integration Needed?

In the past, developers on a team might work in isolation for an extended period of time and only merge their changes to the master branch once their work was completed. This made merging code changes difficult and time-consuming, and also resulted in bugs accumulating for a long time without correction. These factors made it harder to deliver updates to customers quickly.

How does Continuous Integration Work?

With continuous integration, developers frequently commit to a shared repository using a version control system such as Git. Prior to each commit, developers may choose to run local unit tests on their code as an extra verification layer before integrating. A continuous integration service automatically builds and runs unit tests on the new code changes to immediately surface any errors.
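
Below is a minimal sketch of the kind of step a continuous integration service might run on each commit; the build and test commands (pip, pytest) are assumptions for illustration, not HubDiscovery's actual pipeline:

```python
# Illustrative CI step: install dependencies and run unit tests on every commit.
# A non-zero exit code from any step fails the build so errors surface immediately.
import subprocess
import sys

def run(step_name: str, command: list[str]) -> None:
    """Run one pipeline step; a non-zero exit code fails the whole build."""
    print(f"--- {step_name} ---")
    result = subprocess.run(command)
    if result.returncode != 0:
        print(f"{step_name} failed; failing the build.")
        sys.exit(result.returncode)

if __name__ == "__main__":
    run("install dependencies", [sys.executable, "-m", "pip", "install", "-r", "requirements.txt"])
    run("unit tests", [sys.executable, "-m", "pytest", "-q"])
    print("Build passed; ready for the delivery stage.")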


Continuous Delivery - CD

Continuous delivery is a software development practice where code changes are automatically prepared for a release to production. A pillar of modern application development, continuous delivery expands upon continuous integration by deploying all code changes to a testing environment and/or a production environment after the build stage. When properly implemented, developers will always have a deployment-ready build artifact that has passed through a standardized test process.

Continuous delivery lets developers automate testing beyond just unit tests so they can verify application updates across multiple dimensions before deploying to customers. These tests may include UI testing, load testing, integration testing, Application Programming Interface - API reliability testing, etc. This helps developers more thoroughly validate updates and pre-emptively discover issues. With the cloud, it is easy and cost-effective to automate the creation and replication of multiple environments for testing, which was previously difficult to do on-premises.

Continuous Delivery vs. Continuous Deployment

With continuous delivery, every code change is built, tested, and then pushed to a non-production testing or staging environment. There can be multiple, parallel test stages before a production deployment. The difference between continuous delivery and continuous deployment is the presence of a manual approval before updating production. With continuous deployment, deployment to production happens automatically without explicit approval.
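
The following sketch illustrates that distinction; the stage names and helper functions are hypothetical placeholders, not a real deployment tool:

```python
# Illustrative sketch of continuous delivery vs. continuous deployment:
# delivery inserts a manual approval before production, deployment promotes
# automatically. All names here are made up for the example.

def deploy(artifact: str, environment: str) -> None:
    print(f"Deploying {artifact} to {environment}")

def run_acceptance_tests(environment: str) -> None:
    print(f"Running UI/load/integration tests against {environment}")

def pipeline(build_artifact: str, continuous_deployment: bool) -> None:
    deploy(build_artifact, environment="staging")
    run_acceptance_tests(environment="staging")

    if not continuous_deployment:
        # Continuous delivery: a human explicitly approves the production push.
        if input("Promote to production? [y/N] ").lower() != "y":
            print("Release held at staging.")
            return

    # Continuous deployment: this step runs with no manual gate.
    deploy(build_artifact, environment="production")

if __name__ == "__main__":
    pipeline("app-build.tar.gz", continuous_deployment=False)
```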


Microservices

The Microservices Architecture is a design approach to build a single application as a set of small services. Each service runs in its own process and communicates with other services through a well-defined interface using a lightweight mechanism, typically an HTTP-based Application Programming Interface - API. Microservices are built around business capabilities; each service is scoped to a single purpose. You can use different frameworks or programming languages to write microservices and deploy them independently, as a single service, or as a group of services.
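
As an illustration of the pattern (not a HubDiscovery service), here is a minimal single-purpose service exposing a lightweight HTTP API using only the Python standard library; the /health endpoint and port are example choices:

```python
# Minimal single-purpose microservice exposing an HTTP-based API.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthCheckService(BaseHTTPRequestHandler):
    """One narrowly scoped service: it only reports its own status."""

    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"service": "health-check", "status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Each microservice runs in its own process and is deployed independently.
    HTTPServer(("0.0.0.0", 8080), HealthCheckService).serve_forever()
```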


Infrastructure as Code

Infrastructure as code is a practice in which infrastructure is provisioned and managed using code and software development techniques, such as version control and continuous integration. The cloud’s API-driven model enables developers and system administrators to interact with infrastructure programmatically, and at scale, instead of needing to manually set up and configure resources. Thus, engineers can interface with infrastructure using code-based tools and treat infrastructure in a manner similar to how they treat application code. Because they are defined by code, infrastructure and servers can quickly be deployed using standardized patterns, updated with the latest patches and versions, or duplicated in repeatable ways.
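
A minimal sketch of the idea follows, assuming a made-up declarative spec and reconciliation step rather than any real cloud provider's API:

```python
# Illustrative infrastructure-as-code sketch: a declarative spec kept in version
# control, reconciled against the current state. Resource names and fields are
# assumptions for the example, not a real provider's API.

DESIRED_STATE = {
    "web-server": {"type": "vm", "cpu": 2, "memory_gb": 4, "count": 3},
    "app-db":     {"type": "database", "engine": "postgres", "storage_gb": 100},
}

def apply(desired: dict, existing: dict) -> None:
    """Reconcile actual infrastructure toward the version-controlled spec."""
    for name, spec in desired.items():
        if existing.get(name) != spec:
            print(f"creating/updating {name}: {spec}")
            existing[name] = spec      # a real tool would call the provider's API here
    for name in set(existing) - set(desired):
        print(f"removing {name} (no longer in the spec)")
        del existing[name]

if __name__ == "__main__":
    current: dict = {}
    apply(DESIRED_STATE, current)   # repeatable: running it again makes no changes
    apply(DESIRED_STATE, current)
```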


Configuration Management

Developers and system administrators use code to automate the operating system and host configuration, operational tasks, and more. The use of code makes configuration changes repeatable and standardized. It frees developers and systems administrators from manually configuring operating systems, system applications, or server software.
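
As a small illustration (the file path and setting are example values, not a specific tool's syntax), an idempotent configuration step might look like this:

```python
# Illustrative configuration-management step: ensure a setting is present in a
# config file, and only report a change when one was actually needed (idempotent).
from pathlib import Path

def ensure_line(config_file: Path, line: str) -> bool:
    """Add `line` to the file if missing; return True when a change was made."""
    existing = config_file.read_text().splitlines() if config_file.exists() else []
    if line in existing:
        return False                  # already configured; nothing to do
    config_file.write_text("\n".join(existing + [line]) + "\n")
    return True

if __name__ == "__main__":
    changed = ensure_line(Path("/tmp/example_sshd_config"), "PermitRootLogin no")
    print("changed" if changed else "already compliant")
```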


Policy as Code

With infrastructure and its configuration codified with the cloud, organizations can monitor and enforce compliance dynamically and at scale. Infrastructure that is described by code can thus be tracked, validated, and reconfigured in an automated way. This makes it easier for organizations to govern changes over resources and ensure that security measures are properly enforced in a distributed manner (e.g. information security or compliance with PCI-DSS, HIPAA, HITECH, ACA, GDPR, California CCPA, NYS Data Protection and Privacy Laws, etc.). This allows teams within an organization to move at higher velocity since non-compliant resources can be automatically flagged for further investigation or even automatically brought back into compliance.
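
A minimal policy-as-code sketch is shown below, assuming a made-up resource format and a single encryption-at-rest rule of the kind HIPAA-style requirements call for:

```python
# Illustrative policy-as-code check: scan codified resource definitions and flag
# anything that violates an encryption-at-rest rule. The resource format and the
# rule are assumptions for the sketch, not a real compliance framework.

RESOURCES = [
    {"name": "patient-records-bucket", "type": "storage", "encrypted": True},
    {"name": "analytics-scratch",      "type": "storage", "encrypted": False},
]

def check_encryption(resources: list[dict]) -> list[str]:
    """Return the names of storage resources that are not encrypted at rest."""
    return [r["name"] for r in resources
            if r["type"] == "storage" and not r.get("encrypted", False)]

if __name__ == "__main__":
    violations = check_encryption(RESOURCES)
    for name in violations:
        print(f"NON-COMPLIANT: {name} is not encrypted at rest")
    # A CI gate could fail the pipeline (exit non-zero) when violations exist.
    raise SystemExit(1 if violations else 0)
```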


Monitoring and Logging

Organizations monitor metrics and logs to see how application and infrastructure performance impacts the experience of their product’s end-user. By capturing, categorizing, and then analyzing data and logs generated by applications and infrastructure, organizations understand how changes or updates impact users and gain insight into the root causes of problems or unexpected changes. Active monitoring becomes increasingly important as services must be available 24/7 and as application and infrastructure update frequency increases. Creating alerts or performing real-time analysis of this data also helps organizations more proactively monitor their services.
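
As an illustration of threshold-based alerting (the metric values and threshold are invented examples), a simple check over recent latency samples might look like this:

```python
# Illustrative monitoring sketch: evaluate recent latency samples against a
# threshold and emit an alert-level log entry when it is exceeded.
import logging
import statistics

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("monitoring")

def check_latency(samples_ms: list[float], p95_threshold_ms: float) -> None:
    """Alert when the 95th-percentile latency exceeds the agreed threshold."""
    p95 = statistics.quantiles(samples_ms, n=20)[18]   # 95th percentile
    if p95 > p95_threshold_ms:
        log.warning("p95 latency %.0f ms exceeds threshold %.0f ms", p95, p95_threshold_ms)
    else:
        log.info("p95 latency %.0f ms within threshold", p95)

if __name__ == "__main__":
    recent = [120, 135, 128, 300, 142, 131, 125, 138, 460, 129,
              133, 127, 141, 136, 124, 130, 126, 139, 134, 132]
    check_latency(recent, p95_threshold_ms=250)
```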


Communication and Collaboration

Increased communication and collaboration in an organization are among the key cultural aspects of DevOps. The use of DevOps tooling and automation of the software delivery process establishes collaboration by physically bringing together the workflows and responsibilities of development and operations. Building on top of that, these teams set strong cultural norms around information sharing and facilitating communication through the use of chat applications, issue or project tracking systems, and wikis. This helps speed up communication across developers, operations, and even other teams like marketing or sales, allowing all parts of the organization to align more closely on goals and projects.

Welcome to the HubBucket Discovery ("HubDiscovery") website.