Cloud Security Archives | HealthTech Magazines

How the Cloud is Transforming Healthcare

By Tsvi Gal, VP, Enterprise Technology Services and Atti Riazi, SVP & CIO, Memorial Sloan Kettering Cancer Center

Healthcare brings unique challenges to technology adoption. The data representing people, diseases, and medicine is complex and voluminous. Consumers and providers of healthcare desire easy-to-use and increasingly mobile solutions and need more virtual options to keep up with the ever-changing landscape, while researchers want safe access to data at scale to make discoveries that increase knowledge and improve care. The healthcare industry needs solutions that provide a simple yet engaging experience, as well as mechanisms to leverage massive amounts of complex data safely – all with a level of reliability appropriate when dealing with the health and wellbeing of individuals and communities. On top of that, for non-profits like Memorial Sloan Kettering Cancer Center (MSK), these solutions and mechanisms cannot be cost-prohibitive. Collectively, these needs seem daunting, but cloud technology is proving to be capable of making innovation more efficient and agility more feasible.


Cloud technology has become an engine for transformative change in business settings. Hyperscale cloud platforms offer unparalleled compute capabilities, with seemingly unlimited capacity that can scale dynamically. They also include rapidly evolving services that take care of more and more of the infrastructure heavy lifting, enabling a concerted focus on feature functionality. Little upfront or capital investment is needed, and you pay only for what you use, which makes experimentation more feasible. This combination of elastic scalability, on-demand advanced functionality, and minimal commitment makes the cloud the ultimate fuel for transformation, especially when coupled with a rich ecosystem of SaaS offerings.

At MSK, we embarked on a cloud journey that emphasized innovation. The journey was based on the premise that giving our clinical and research technology teams access to cloud services with minimal friction, while adhering to standards, would result in innovative solutions. Cloud services can readily handle the volume and variety of healthcare data while meeting institutional needs for security and reliability, and we have already experienced how the cloud enables us to respond rapidly to the changing needs of healthcare and create solutions that reach new markets with fewer physical presence requirements.


We are aware that, beyond MSK, there is potential for ramifications globally. Atti Riazi says, “Although the cloud helps lower the volume of greenhouse gases by reducing the number of servers used by so many of us, much of the electricity needed to maintain the cloud, unfortunately, is derived from fossil fuels and coal. We as technologists must take a position on the impact of technology on the environment, especially e-waste. We need to be a society and a collective of organized responsibility as technologists and innovators.”

In a regulated industry like healthcare, security and governance are must-haves. Our institution needed effective cloud governance and controls provided by default, and mechanisms to ensure security was baked into the design of each cloud application. Next, we needed to level up the cloud technical skills of our existing teams and supplement them with cloud experts hired in today's highly competitive market. We also needed to balance our desire for agility and innovation with fiscal responsibility and transparency, understanding the risks of adopting the cloud with a blank-check mindset. And above all, we needed to maintain our standard of premium care.

While our cloud adoption approach includes many facets, a few core pillars have proved essential to the success we have experienced to date. A central cloud platform team was created to build a common foundational layer in our two clouds—AWS and Azure—that provides standardized configurations of our cloud accounts and shared services for cloud application teams to consume. We then established a Cloud Center of Excellence (CCoE) as a central body to drive cloud policies and standards in close collaboration with our compliance, cybersecurity, cloud platform, and cloud application teams, as well as to initiate FinOps practices and organize a cloud community of practice.
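
As a hedged illustration of what "standardized configurations" and default guardrails can look like in practice, the sketch below uses boto3 to flag S3 buckets without default encryption in a single AWS account. The specific control and the reporting format are assumptions chosen for the example, not MSK's actual platform tooling.

    # Minimal guardrail sketch (hypothetical policy): verify every S3 bucket in an
    # AWS account has default server-side encryption, one baseline control a
    # central cloud platform team might enforce across standardized accounts.
    import boto3
    from botocore.exceptions import ClientError

    def unencrypted_buckets() -> list[str]:
        s3 = boto3.client("s3")
        offenders = []
        for bucket in s3.list_buckets()["Buckets"]:
            name = bucket["Name"]
            try:
                s3.get_bucket_encryption(Bucket=name)  # raises if no default encryption
            except ClientError as err:
                if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
                    offenders.append(name)
                else:
                    raise
        return offenders

    if __name__ == "__main__":
        for name in unencrypted_buckets():
            print(f"non-compliant bucket: {name}")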

During the COVID-19 pandemic, many clinical services needed to pivot to a more virtual experience for the safety of our patients and care providers. Almost overnight, demand for our telemedicine options exploded, and the cloud played a vital role in keeping operations running. In response, a plan was formed to reimagine our telemedicine solution as a cloud-native application capable of dynamically scaling with demand, leveraging the built-in reliability and security of cloud services, and providing a rich, integrated experience for the patient and the care provider. This new telemedicine solution would use Microsoft Teams and cutting-edge Azure services such as Azure Communication Services and the Azure Bot Framework. An agile approach, coupled with close collaboration with our Microsoft Azure team, would enable the rapid, iterative development needed to deploy quickly.
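
A minimal sketch of one building block is shown below: Azure Communication Services issues short-lived, per-user access tokens that a telehealth client then uses to join a call. The sketch assumes the azure-communication-identity Python package and a connection string stored in an environment variable; it is illustrative only and is not the application's actual code.

    # Illustrative sketch: mint a VoIP access token for a virtual-visit participant
    # with Azure Communication Services. The environment variable name and scope
    # choice are assumptions for the example, not the production implementation.
    import os
    from azure.communication.identity import CommunicationIdentityClient

    def issue_visit_token() -> dict:
        client = CommunicationIdentityClient.from_connection_string(
            os.environ["ACS_CONNECTION_STRING"]
        )
        user, token = client.create_user_and_token(scopes=["voip"])
        # 'user' is the opaque ACS identity; 'token' carries the credential and expiry
        # that the telehealth client presents when joining the call.
        return {"token": token.token, "expires_on": str(token.expires_on)}

    if __name__ == "__main__":
        print(issue_visit_token())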

In a matter of months, our telemedicine offering was transformed. The central cloud platform team implemented our cloud platform on Azure to provide standard networking, audit logging, security controls, and dedicated cloud accounts for all the telemedicine environments. The telemedicine application team developed the infrastructure-as-code (IaC) and application code for the new solution, fully automating the deployment via Azure DevOps pipelines. This empowered the application team to experiment with various Azure services, finding the right mix to meet their functional and non-functional requirements. The new telemedicine solution now serves 30% of our outpatient visits, up from 1% before COVID-19, and will scale up and down as demand changes and the pandemic evolves.
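
The pipeline definitions themselves are not reproduced here, but the underlying idea of idempotent, code-defined infrastructure can be sketched with the Azure SDK for Python; the resource group name, region, and tags below are hypothetical.

    # Sketch of idempotent, code-defined provisioning (names and values hypothetical):
    # re-running create_or_update converges on the same desired state, the property
    # that lets a pipeline deploy the same environment repeatedly and safely.
    import os
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.resource import ResourceManagementClient

    def ensure_resource_group(subscription_id: str) -> None:
        client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)
        client.resource_groups.create_or_update(
            "rg-telemedicine-demo",  # hypothetical resource group name
            {"location": "eastus2", "tags": {"workload": "telehealth", "env": "dev"}},
        )

    if __name__ == "__main__":
        ensure_resource_group(os.environ["AZURE_SUBSCRIPTION_ID"])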

In retrospect, we have experienced the positive impacts of cloud technology in healthcare, along with some challenges. Our development teams are more empowered and are building valuable new skills. When they can spend more time focused on user needs and have robust cloud services in their toolbox, innovative solutions can be deployed rapidly. However, cloud technology is complex and evolves quickly. Our code-first DevOps approach asked our developers to learn new skills and own things in a way they did not before. The collaborative culture at MSK and the support of Microsoft have eased this task, but we understand that learning must be continuous.

And then there are those global ramifications to keep an eye on.

Running our telemedicine solution on cloud-native services has given us unprecedented visibility into the costs and usage, which creates unique opportunities to optimize code to improve performance and increase cost-efficiency.
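
As a toy illustration of that cost visibility, the snippet below joins an exported daily cost report with telemedicine visit counts to compute cost per visit; the file names and columns are assumptions for the example, not an actual billing export.

    # Toy FinOps illustration (file names and columns hypothetical): compute cloud
    # cost per telemedicine visit from a daily cost export and a visit log.
    import pandas as pd

    costs = pd.read_csv("daily_cloud_costs.csv", parse_dates=["date"])    # date, service, cost_usd
    visits = pd.read_csv("daily_visit_counts.csv", parse_dates=["date"])  # date, visits

    daily = costs.groupby("date", as_index=False)["cost_usd"].sum().merge(visits, on="date")
    daily["cost_per_visit"] = daily["cost_usd"] / daily["visits"]

    print(daily.sort_values("date").tail(7))  # last week's cost-efficiency trend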

Smart and Secure PACS (Picture Archiving and Communication System) on Clouds

By Synho Do, Ph.D., Director, Laboratory of Medical Imaging and Computation, Assistant Professor, Massachusetts General Hospital and Harvard Medical School

With the digitization of healthcare comes the opportunity to apply newly developed areas of Artificial Intelligence (AI) and Machine Learning (ML). Converting film to digital images in radiology is comparable to the shift to fully electric cars: the initial worries and concerns about digital radiography reflected uncertainty about the unknown, but they have largely disappeared in light of its current success and essential role in healthcare. A modest initial investment has opened up a method with immediate impact and room for further advancement in the field.

Digitally well-organized data can be processed readily with Convolutional Neural Networks (CNNs). Using a GPU (Graphics Processing Unit) capable of high-speed parallel processing, it is possible to develop an AI algorithm that accurately predicts complex outcomes with a basic programming skillset and a simple understanding of the models. One prerequisite is data that is well organized and labeled as accurately as possible; for optimal results, the quality of data curation matters more than the sheer quantity of data available.
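
For readers who have not seen one, the sketch below defines a deliberately tiny CNN in PyTorch and runs a single forward pass on a random grayscale image; the layer sizes and two-class output are arbitrary illustrative choices, not a clinically validated model.

    # Tiny illustrative CNN (arbitrary sizes, not a clinical model): two convolution
    # blocks followed by a linear classifier, applied to a random 64x64 grayscale image.
    import torch
    import torch.nn as nn

    class TinyCNN(nn.Module):
        def __init__(self, num_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(16 * 16 * 16, num_classes)  # 64 -> 32 -> 16 spatial

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = self.features(x)
            return self.classifier(x.flatten(start_dim=1))

    model = TinyCNN()
    scores = model(torch.randn(1, 1, 64, 64))  # one synthetic single-channel image
    print(scores.shape)  # torch.Size([1, 2])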


Before the arrival of modern AI, such problems were solved with conventional methods from mathematics and engineering applied to well-organized formats. In the current approach, given organized data, an algorithm can be created by learning the unknown weights within the topology of a CNN architecture. In general, a complex neural network composed of many layers performs adequately, though more data is needed to train it; in this case, the quantity of data plays a crucial role in estimating the weights of the AI/ML model.

And so, if importance must be ranked, data quality comes first, followed by the amount of data. The third essential element is high-performance computing power able to process data that is not only complex but also large in volume. Many startups, hospitals, and governments are now well aware of the importance of data and are developing data-oriented science platforms supported by expertise, copious data, massive computational power, and funding from outside the hospital.

Cloud providers such as Amazon, Google, Microsoft, and NVIDIA are working to offer high capacity while also providing a secure computing environment. The advantage of the cloud is its easy scalability: users do not have to worry about hardware problems, and a variety of usage-based pricing models are available.

Currently, uploading hospital data to the cloud can seem to carry unnecessary risk because the consequences are hard to predict. From a hospital's perspective, the risk of leaking sensitive patient information is especially serious: not only would such a leak be unethical, it would also expose the organization to legal liability. The risk is simply too high. Nevertheless, a more convenient and secure method in which patients can safely store their own data can be created.

In part, protected health information (PHI) can be collected through mobile devices or IoT (Internet of Things) sensors linked with healthcare apps in the cloud. Many AI-related business models are being proposed, and to implement them, there is currently a race to obtain data of high quality and sufficient volume. Ultimately, the source and ownership of data become a new issue to consider. Could a method be proposed to compensate patients fairly according to the contribution of their data? If such a method were transparent and safe, would people voluntarily trust and use it?

While the issues facing hospitals seem relatively new, the same problems have existed for a long time in cryptography, where solutions have been found. In many cases, innovation solves our problems by sharing successful technology: techniques that work well in one field can often be translated to another, and it is this translation that drives growth in technology.

Cryptography, which studies how to encrypt data so that information can be sent and received safely, started from the mathematical foundations of number theory and is now used practically in many application fields:

  • How do you authenticate with the other party without leaking any information? Zero-knowledge authentication.
  • Is it possible to confirm that not even a tiny amount of data has changed within a massive dataset? Hashing (see the sketch after this list).
  • Can I prove to others that I am the real me? DID (Decentralized Identifiers).
  • Can everyone confirm that I am the valid owner of this digital content? NFT (Non-fungible token).
  • Can you trust the result of a computation performed on encrypted data without exposing the critical data itself? FHE (Fully Homomorphic Encryption).
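
As a minimal illustration of the hashing item above, the snippet below fingerprints an exported file so that any later change, however small, is detectable; the file name is hypothetical.

    # Minimal integrity-check sketch: a SHA-256 digest recorded at export time will
    # no longer match if even one byte of the file changes later. File name is
    # hypothetical.
    import hashlib

    def sha256_of(path: str) -> str:
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
                digest.update(chunk)
        return digest.hexdigest()

    recorded = sha256_of("study_0001.dcm")
    # ...later, before using the data...
    assert sha256_of("study_0001.dcm") == recorded, "data has been altered"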

When these previously developed cryptographic technologies are tested and implemented in the healthcare data flow, value generation through AI algorithms will become possible. Those who provide data can be rewarded according to the accuracy and rarity of their data. Experts who label data can also be rewarded according to accuracy and difficulty. In addition, data scientists and entrepreneurs who safely manage and process data can be rewarded according to the performance and benefit of the resulting AI solutions. While the end goal is a remedy for our flawed hospital data systems, significant change will come from many small successes.

Developing a Cloud Security Strategy

By Shefali Mookencherry, MPH, MSMIS, RHIA, CHPS, HCISPP, CISO, Edward-Elmhurst Health

The mitigation of security risks in cloud computing is a challenge to many healthcare organizations. As organizations move to the cloud more frequently, cloud security is a major concern for CIOs and CISOs.  

Most organizations fear losing control when they move to the cloud. The discussion around security risk in the cloud requires organizations to distinguish real risk from unease: they may worry about losing control of their data and fear the risks that come with that loss.

Organizations need to invest time to develop a cloud security strategy.

Cloud Security Strategy

What follows is a high-level cloud strategy for evaluating security risks and identifying what an organization should consider when mitigating those risks.

Review cloud risk implications
  • Review all areas of risk. Decision-makers contemplating cloud computing adoption face several challenges relating to policy, procedures, technology, guidance, security, and standards. In the cloud, data is entrusted to a third party and shares tenancy with other people's data, which requires stringent access security. Regulatory compliance might require visibility into where data is stored and who has access.

  • Discuss cloud risk implications and stakeholder concerns. Most cloud projects are driven by IT and focus on specific technologies. To deliver organizational value and minimize risk exposure, such initiatives should be aligned to the organization’s business strategies. The Board or risk management governance committees’ active engagement and oversight are essential prerequisites for the success of a cloud security program.
Mitigate cloud security and compliance threats
  • Identify key security threats in the cloud. Security risks of cloud computing may include compliance violations, identity theft, malware infections, data breaches, diminished customer trust, and potential revenue loss. 

  • Evaluate the role and limits of assurance. Identify and review data storage issues raised by multi-tenancy, such as how different clients’ assets are segregated and what assurances about separation can be provided to the data owners.

  • Review compliance requirements. Legal opinion should be sought to ensure that regulatory requirements are addressed for specific organizational needs related to HIPAA, PCI-DSS, and other security regulations.

  • Develop an action plan of steps to mitigate cloud security threats. Consider utilizing a Single Sign-on solution and implementing end-to-end encryption.
Address cloud availability and reliability challenges
  • Discuss availability and reliability challenges. Confirm that the cloud solution will be available and reliable; consider continuity planning and ensure that a defined set of processes is in place to manage and reclaim data should the service cease permanently.

  • Review current recovery capabilities and requirements. Ensure the organization has the required network connectivity, bandwidth, and proper technology to enable adequate services from a cloud provider. For example, when the internal network goes down or becomes unstable, employees cannot access any applications hosted on the cloud.

  • Assess mitigation tactics for availability and reliability risks. Organizations have to make sure that security is built into their cloud infrastructure – which includes selecting the right cloud deployment option from a supplier who can offer the right security measures.
Assess cloud integration challenges
  • Understand integration challenges in the cloud. Integration plays a vital role in the cloud as it ensures that applications, infrastructure, and data with interdependencies maintain their connections.

  • Review current integration processes and plans. Consider cloud service brokers, which provide an intermediate layer between multiple cloud vendors and users while offering services such as selection, aggregation, integration, performance management, and security. They should be able to unify legacy services and new multi-sourced cloud-based offers into a common management platform and provision preconfigured applications as part of a service integration solution.

  • Outline application relationships and integration challenges. Review the list of applications that may potentially move to the cloud and determine whether any should remain on-premises.
Identify the impact on internal infrastructure
  • Identify impacted infrastructure and data components. Put in place the right enterprise architecture framework, one that contains the processes, products, tools, and techniques needed to create a complete IT system architecture for all infrastructure.

  • Determine cloud infrastructure requirements. A security architecture model may be useful during security architecture design. Conceptual security services can be grouped into high-level areas such as hosting, security governance, compliance, integrity, availability, cryptography, risk, and access management.
Identify required staff resources and costs
  • Understand the shift in IT responsibility. To effectively manage cloud service providers and appropriately staff the internal IT department, IT expertise and roles must be inventoried and appropriately resourced. Identify changes to existing staff resourcing.

  • Review Service Level Agreements (SLAs) and contract. The help desk team may need to contact vendors directly to log tickets or go through the IT vendor management team. SLAs should clearly set expectations with users about the time to resolve issues. If these expectations are realistic and based on vendor SLAs, end-users should remain satisfied. Some internal SLAs may need to change to accommodate new cloud vendor SLAs.

  • Evaluate Total Cost of Ownership (TCO) for cloud services. A TCO analysis involves creating a breakdown of expenses related to implementing cloud services. These costs are generally divided into four categories: ISP bills, staffing, hardware, and software. Vendors may provide calculators to help determine costs but consider the physical environment, application, process, and people. Keep in mind the different cloud deployment and support models, noting which models might be the best for you, as they affect the cost.
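
A back-of-the-envelope version of that TCO breakdown can be expressed in a few lines of code; the figures below are placeholders rather than benchmarks.

    # Back-of-the-envelope TCO sketch using the four categories named above.
    # All dollar figures are placeholders, not benchmarks.
    ANNUAL_COSTS = {
        "isp_bills": 36_000,   # upgraded bandwidth / redundant circuits
        "staffing": 150_000,   # cloud and vendor-management effort
        "hardware": 12_000,    # remaining on-premise gear (e.g., local network)
        "software": 90_000,    # subscriptions and per-user licensing
    }

    def total_cost_of_ownership(years: int = 3) -> int:
        return sum(ANNUAL_COSTS.values()) * years

    print(f"3-year TCO estimate: ${total_cost_of_ownership():,}")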

Although this list is a high-level review of a cloud security strategy, organizations should gain buy-in from senior leadership; providing awareness training to senior leadership and the Board may improve the chances of successful cloud computing adoption. Cloud computing presents many security issues, and each organization should understand its respective role in addressing the risks inherent in cloud computing.

Active Cyber Defenses for Third Party Risk Management

By John Keenan, Director, IS e-Security, Memorial Hospital at Gulfport

Actions that disrupt business or governmental operations in the cyber domain have often been referred to as cyber warfare and, more recently, as cybercrime. The program we have implemented has therefore evolved by combining deep healthcare information security tactics, techniques, and procedures with an executive vision that relies heavily on the military decision-making process and the tools the military teaches all of its operational planners to use when building adaptive plans. A continuous effort to understand how critical assets in the cyber domain are woven into the fabric of the business forms the foundation of our entire information security program. We then apply concepts like self-evaluation of defensive positions (KOCOA), evaluation of enemy capabilities (SALUTE reports), and the enemy's most probable course of action (EMPCOA) to understand our risk whenever an attack happens, an exploit is found in the wild, a new vulnerability is announced, or a new element, such as an information exchange with a new vendor, changes our information ecosystem. One area that requires constant attention but often gets little funding is third-party risk management. Risk management, especially third-party risk management, is a constant process of assessment and re-assessment to understand what each vendor brings to the business and what risks they potentially introduce. In our information security program, we employ the principles of NIST SP 800-53 to take on third-party risk aggressively.

Our data and our supply chains are our “center[s] of gravity”; not in the astrophysics definition of the term, but in the Carl von Clausewitz (Prussian military theorist during the Napoleonic Wars) usage of the term: “The center of gravity of an armed force refers to those sources of strength or balance. It is that characteristic, capability, or locality from which the force derives its freedom of action, physical strength, or will to fight.” In connected medicine, a center of gravity is the free exchange of information and the information itself. In healthcare delivery, a center of gravity is the supply chain. We identify the defensive measures needed to protect data and our supply chain. We analyze potential criminal activity and assess our connectivity with those who share our data for potential weaknesses.

We prepare and implement our defensive actions and "battlements" based on common military tactics. We evaluate KOCOA: Key Terrain, Observation and fields of fire, Cover and Concealment, Obstacles, and Avenues of Approach. People, endpoints, and infrastructure represent the key terrain that surrounds the data and the supply chain. As we analyze observation and fields of fire, we review what we know about where the data goes when it leaves our facility and what controls we implement to protect it. If gaps exist in our visibility, we look for ways to address them or add them to the risk register. Some gaps may be mitigated by existing controls; we recognize those existing controls (which may or may not be technical) as cover and concealment. We also treat a well-segmented network as an intentional set of obstacles placed to redirect or stall criminal activity. We continuously assess these controls with a variety of tests. The final step in our defensive preparation is analyzing the avenues of approach.


A center of gravity, in its inverse, can often represent a critical vulnerability. That is to say, data and supply chains that are not well protected can become the source of an attacker's strength rather than the defender's stronghold. Each new connection with our centers of gravity could become the next avenue of approach if not properly protected. As such, we look at all avenues of approach with an "assumed breach" mentality so that we can rank them and arrive at what we perceive as the enemy's most probable course of action. This requires a well-developed process of threat intelligence evaluation. We look at all threats to our ecosystem and the healthcare industry and assess what is going on by estimating size, activity, location, unit, time, and equipment (a SALUTE report). It also requires thorough analysis of each vendor's proposed architecture as it interacts with our environment, and detailed Socratic questioning of the vendor through dialog, questionnaires, annual re-assessments, and re-assessments whenever there is a major architectural change. We assess every third party's publicly available information using a "real-time" third-party risk management tool. Of course, vendors can attest to security measures in questionnaires or by submitting evidence of audits. Still, if the attestations do not match the publicly available data, that generates additional questions. In that case, we dig into the details with the vendors to get them to document the exceptions to their policies that we have assessed as risks to our information-sharing agreements.
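
A toy version of that attestation-versus-evidence comparison is sketched below; the control names, vendor data, and flagging rule are invented for illustration and do not represent any particular risk-rating product.

    # Toy third-party risk sketch (control names, vendor data, and rule invented):
    # compare what a vendor attests to in a questionnaire against externally
    # observed findings, and flag mismatches that should trigger follow-up questions.
    ATTESTATIONS = {  # from the vendor questionnaire
        "tls_everywhere": True,
        "mfa_enforced": True,
        "patched_within_30_days": True,
    }
    EXTERNAL_FINDINGS = {  # from a public-scan / rating tool
        "tls_everywhere": True,
        "mfa_enforced": True,
        "patched_within_30_days": False,  # scan shows an unpatched internet-facing host
    }

    def mismatches(attested: dict, observed: dict) -> list[str]:
        return [c for c, claim in attested.items() if claim and not observed.get(c, False)]

    for control in mismatches(ATTESTATIONS, EXTERNAL_FINDINGS):
        print(f"follow up with vendor: attestation for '{control}' not supported by external data")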

Connected medicine benefits significantly from many different software-as-a-service subscriptions. As business units enlist cloud services to speed the delivery of connected medicine, the third-party risk management program must evolve to address the OWASP Top 10 and the OWASP Mobile Top 10. Cloud security, shared as a responsibility between a cloud provider and cloud consumer, clearly cannot be assumed. Headlines prove that even companies with proven track records in traditional application security have failed when migrating to the cloud. To properly guard critical assets, we constantly assess emerging capabilities in healthcare; how new products and services might create avenues of approach that impact a previously secure sector of our technological ecosystem; and how to engage in meaningful dialog with innovators on baking security into their products in ways that redistribute the risk equitably rather than starkly in favor of the vendor.

No third-party risk program, no matter how well funded, will eliminate risk. Risk management and security are processes, not products. Once a device is connected, it is inherently insecure and must undergo constant evaluation for how that connectedness can impact the overall security posture. Only through constant interaction with business associates and equipment providers can we harden our ecosystems with technical, administrative, and compensating controls that provide solid security foundations that limit damage and risk in the event of a compromise.

The Cyber Maginot Line

By Chris Baldwin, System Director (CISO), Hartford Healthcare

Maya Angelou, a noted civil rights activist, once said, “hope for the best, prepare for the worst, and be unsurprised by anything in between.” This is useful thinking in support of effective cybersecurity.

The French Maginot Line

On May 10, 1940, Germany began its invasion of France through the Ardennes Forest in southeast Belgium. The French believed an attack through this dense, rugged terrain was improbable. They had spent the previous nine years and roughly three billion francs constructing a 280-mile fortification called the Maginot Line. The French were aware the Maginot Line might be bypassed, but they did not seriously consider the Ardennes a plausible alternative.

Many factors made the Ardennes attack successful, and they are applicable to cybersecurity: the blitzkrieg tank tactics; the excellent command and control, supported by advanced radio communications in the Panzer tanks; and the scouting reports of German activity in the Ardennes that were dismissed by the French. But most importantly, the Germans had the intent, resources, and creativity to dominate in a new type of warfare. It took six weeks to defeat France, Belgium, Luxembourg, and the Netherlands. Once the German army penetrated France, allied forces' morale quickly deteriorated and command and control broke down. The French were ill-prepared for the eventuality of a successful military penetration.

Maginot Thinking: What is cyber-Maginot thinking?

Many technical tools add great value to an effective cyber defense, which includes next-generation firewalls, advanced end-point antivirus, and state of the art email security platforms. But there is no one-and-done when it comes to cybersecurity. Any strategy predicated on building up defensive safeguards that support the “I am now secure” mindset is dangerous. This thinking assumes threat actors will not continue to adapt their tactics and try again even if they fail the first time. Federal and state security regulations are certainly extensive and important. The Health Insurance Portability and Accountability Act (HIPAA), being the most notable in healthcare, is a solid framework for driving compliance with baseline standards. But compliance is not the same as security, especially with the nefarious motivations and capability of international threat actors prevalent today.

Changing Threat Landscape

Today, it is possible for cyber-criminals and nation-state threat actors to construct effective offensive cyber capability with very modest resources. Building an effective cybersecurity defense program is more challenging. The National Institute of Standards and Technology (NIST) has done a good job in identifying and defining standards for the many elements of cybersecurity defense. The NIST Cyber Security Framework (CSF) is an excellent paradigm for thinking about a security architecture that starts with the right mindset. As defined in NIST CSF, there are five core functions to an effective defense:  Identify, Protect, Detect, Respond, and Recover. The last three tacitly assume an attack will occur, and therefore: 1) the importance of early detection, 2) the need for a flexible and comprehensive incident response process, and 3) that response mitigation and recovery will eventually be needed.

In October 2020, the FBI began issuing warnings that international cybercriminals were targeting the US healthcare system. Within a few weeks, there were reports in the media of ransomware infections at hospitals and health systems around the country. In December 2020, some of the most stalwart technology and security firms, including SolarWinds and FireEye, announced they had been compromised. These are firms corporate America relies upon to stay secure, and yet even they were vulnerable.

According to Mandiant (a division of FireEye), in their 2020 M-Trends report, the global median dwell time, defined as the duration between the start of a cyber-intrusion and it being identified, was 56 days. The more time a threat actor has inside a network, the more time they have to conduct reconnaissance, scan for vulnerabilities, seek to escalate privileges, and gain access to technical and corporate data that could represent an existential risk to almost any organization.

The requisite cyber defenses for every organization will, of course, vary depending upon all the unique characteristics of each entity’s digital footprint. Some may have moved extensive resources to the Cloud. This presents both risk and opportunity. Moving to the Cloud can improve an organization’s security posture because providers such as Amazon Web Services or Microsoft have vastly more resources to apply to cybersecurity. On the other hand, assessing the efficacy of Cloud services before making a move is critical. Not all Cloud providers are alike.

People, processes, and tools are all critically important for effective cybersecurity controls. New job roles, such as threat hunters, are becoming more commonplace. Rapid detection of an intrusion is essential in responding effectively and being able to recover with minimal impact. New skills are required for detection and other state of the art security functions. 

Continuous testing is also critical. When critical weaknesses are found, any patch or remediation should always be tested again to ensure efficacy. Nothing should be taken for granted. For this type of penetration testing, outsourcing may be a viable option, especially if the staff you would rely upon internally to conduct the testing are the same individuals responsible for implementing the technical controls.

Cyber Security Governance

Cybersecurity is highly technical and complex. For many companies, it represents a significant organizational risk. One of the most important safeguards is not technical. Governance and effective risk management are foundational to an effective cybersecurity program. Some important governance questions include:  How are we balancing resources with other competing funding priorities? What levels of cyber liability insurance are appropriate for the organization? 

Effective governance is fundamental to technology adoption in general. Cybersecurity governance is most effective when it supports well-crafted strategies and tactics supported with capital and operating funds over a sustained period of years, along with the mindset that supports flexibility and adaptability in an ever-changing and increasingly dangerous threat landscape.

Those with ill intent have shown they have the resources and creativity to be successful. In cybersecurity, Maginot thinking — a faulty reliance on strategies that do not realistically consider the possibility of compromise, however fortified and well-conceived — is dangerous.

Lacework Cloud Security & Compliance For Health Tech Companies

The COVID-19 pandemic forced a global reset for most organizations, requiring them to shift their operations to a fully remote-working culture. This wave of change resulted in a rapid adoption of software as a service (SaaS) applications that surpassed all industry predictions and led to a drastic increase in the consumption of cloud services.

“Organizations witnessed their cloud activity volume double overnight. But the most fascinating thing, despite the spike in the volume for our clients, is that their security alert volume remained low,” said Dan Hubbard, CEO of Lacework, a provider of security for today’s cloud generation. “Having an engine that is intelligent enough to tell the differences between normal consumption changes, risky changes and actual threats is essential to operating securely when the unexpected happens. Because scaling that manually just isn’t possible.”

Lacework was built to recognize and understand cloud changes at scale without requiring manual interventions by cybersecurity teams. It’s designed to preempt ransomware and any other security threats. At the same time, Lacework’s solutions are created to give customers the visibility, context and telemetry needed to assess cloud security postures quickly, prove compliance, secure cloud workloads and investigate anomalous activity or answer an auditor’s question, all in one place.

Polygraph: Built for Observability

During its initial days, Lacework recognized cloud security as a data problem and architected its solutions accordingly. Its platform is crafted a bit differently from that of its industry peers and is tailored to each unique environment by leveraging the power of machine learning.

“Building on cloud is the biggest shift the IT industry has ever witnessed. Constant cloud changes require a new approach to security that is critical for our customers to adapt and scale as fast as cloud innovation,” says Hubbard.

Lacework’s renowned offering for cloud security is based on behavior anomaly and rogue API detection. Polygraph is Lacework’s patented, machine learning-based efficacy engine that can capture, organize and monitor security data. Lacework’s foundation is based on this flagship offering that has played a predominant role in their customers’ business. “To build on a cloud environment, clients need to know the difference between normal and threatening changes, and Polygraph, our one-of-a-kind technology, is designed to observe and understand all of those changes over time,” says Hubbard. “The engine helps reduce false alarms, improves detection and self-tunes to monitor compliance effectively.”

“What sets us apart from other cloud security providers is that our platform helps customers consolidate up to four other tools from their current environment. Further, they can receive close to 98% fewer false positives and achieve a nearly 90% reduction in event investigation and research time,” said Hubbard.

The Lacework platform also enables customers to curb security costs while increasing visibility across their cloud and container environments. “As we work with businesses, we are aware that their security teams are responsible for defining functions that different users have in an ever-changing cloud environment. Lacework’s behavior analytics-based platform helps these teams understand and predict those needs,” said Hubbard.

The behavior analytics-based solution is useful in monitoring changes and deviations in containers, workloads and clouds to provide high-fidelity alerts with context when something significant occurs. This enables security professionals, analysts and investigators to quickly detect the users, machines and applications involved in a particular incident or account. The alerts will also help expose entities and their actions involved in incidents. The alert and risk scores related to specific incidents, as well as Polygraph views come in handy for personnel managing their overall cloud security.

Rogue API Detection for Complex Environments

“In addition to detecting the early indicators of compromise that lead to ransomware, Lacework helps recognize rogue API behavior that can cause resource throttling by a CSP, impacting performance and reliability,” says Hubbard.

For example, the Lacework team was preparing a readout of a solution for a client within a complex environment packed with a number of preexisting security tools. Upon deploying Lacework’s security solution, it was easy to see that some of the other tools and applications within that environment were sending excess API calls. This resulted in a high volume of unnecessary cloud traffic which was causing availability problems and risking service outages.
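
A greatly simplified picture of spotting that kind of excess API traffic is sketched below; the event data and threshold are invented for illustration and bear no relation to Lacework's actual engine.

    # Greatly simplified sketch (synthetic events, arbitrary threshold): count API
    # calls per caller from an audit-style event stream and flag callers whose
    # volume far exceeds the rest, a crude stand-in for rogue-API detection.
    from collections import Counter

    events = (  # (caller, api_action) pairs, e.g. parsed from cloud audit logs
        [("app-a", "GetObject")] * 40
        + [("app-b", "PutObject")] * 55
        + [("scanner-tool", "DescribeInstances")] * 900
    )

    calls_per_caller = Counter(caller for caller, _ in events)
    typical = sorted(calls_per_caller.values())[len(calls_per_caller) // 2]  # median volume

    for caller, count in calls_per_caller.items():
        if count > 10 * typical:  # arbitrary "way above normal" threshold
            print(f"possible rogue API behavior: {caller} made {count} calls")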

The deep visibility into application behavior provided by Lacework was something the client had not experienced before. As a result, it gave them insights into how best to modify their systems quickly and eliminate many recurring problems in their environment.

“In a nutshell, Lacework’s cloud observability is the factor that helps our customers connect the dots in order to get actionable insights on their cloud data,” said Hubbard.

A Single Glass Pane for Audit and Compliance

Lacework’s platform also renders a unified view across AWS, GCP and Azure configurations by bringing them into one portal. For the users, this would mean no more logging into disparate tools to evaluate their stance. It becomes a single pane of glass used to audit all cloud platform configurations, and as these configurations change, Lacework can send alerts for compliance. This helps proactively alert the security and compliance teams to resolve issues before any data and cloud resources are compromised.

Additionally, Lacework delivers deep visibility into configurations across its customers’ enterprise cloud accounts and workloads to make sure they comply with all relevant industry, government and institutional standards.

“Operating on multiple cloud platforms can increase the threat vector of the overall infrastructure and add complexity to a preexisting challenge. Lacework operates as a comprehensive, centralized solution to identify, analyze and alert on configuration issues,” Hubbard said. “That is why Lacework functions on the philosophy of empowering customers to meet their business goals.”

Another good example where Lacework can help is in the critical field of healthcare, which is coming under increasing attacks in the form of ransomware and other threats. “In healthcare, cloud computing is an IT infrastructure standard right from clinical data sharing and consumer-facing patient portals all the way to the backend mobile application development platforms,” says Hubbard. “This shift echoes the development of electronic health records and big data analytics activities, which multi-cloud strategies make possible throughout the entire health IT infrastructure.”

But as cloud technology’s role becomes even more prominent in the healthcare space, decision-makers need to better understand compliance and security in order to implement their infrastructure. This requires knowing the potential of cloud security to ensure patient security as well as HIPAA and HITECH compliance. The value of healthcare data and the need for 100% uptime from medical devices make the healthcare vertical especially attractive to potential attackers pushing denial of service type attacks like ransomware. Lacework Polygraph can identify the earliest indicators of those attacks for customers, providing context and enabling them to take quick action before their organization or their critical data is compromised.

The Road Ahead

Recently, Lacework announced that it closed a $525 million funding round valuing the company at over $1 billion. The company is expanding its operations as well as its engineering and R&D teams across the U.S. and Europe. This funding will accelerate those efforts.

Lacework is also expanding into adjacent new spaces while keeping its focus on enabling customers to innovate in the cloud with safety and speed. The new investments will allow Lacework to deliver additional integration into the DevOps toolsets and into the security data lake initiatives for Snowflake customers.


How a California county health system took a quantum leap in disaster preparedness and business continuity through the use of hybrid cloud and network fabric technologies

By Amanpreet Kaur, Information System Analyst IV, San Joaquin General Hospital and Mark Thomas, CIO, San Joaquin General Hospital
Introduction

Until 2018, San Joaquin General Hospital and County Clinics had a relatively immature digital infrastructure. Its core healthcare processes were documented in paper charts, and numerous isolated systems supported the core administrative functions of patient registration, billing, general ledger, and human resources management. In March of that year, the health system successfully pulled off a ‘big-bang’ implementation of monolithic and integrated Electronic Medical Record and Enterprise Resource Planning systems. While this effort yielded a transformation in the organization’s clinical and business processes, it significantly increased its reliance upon Information Technology.  Extended outages were no longer acceptable and a major disruption in IT services became a high risk to patient care and business operations. With the full support of executive leadership, the Information Systems Department initiated a transformation of the organization’s network infrastructure and disaster recovery capabilities to meet these new availability and resiliency goals.

Hybrid-Cloud

For experienced IT professionals, the very phrase “Disaster Recovery Solution” conjures up thoughts of distant co-location facilities, idle hardware, expensive circuits, a large capital investment, and infrequent and often unsuccessful Disaster Recovery Testing events. Today, however, hybrid-cloud solutions offer organizations like ours the ability to deliver superior functionality with less capital investment and a lower total cost of ownership. Hybrid-cloud solutions extend modern data center capabilities into the cloud and enable the organization to leverage its existing on-premise investment.

Nutanix Xi Leap

Since our data center already ran on Nutanix hyperconverged infrastructure, we implemented Nutanix Xi Leap Disaster Recovery as a Service (DRaaS) to avoid a large DR capital investment and extend our existing data center’s capabilities into the cloud. We partnered with Nutanix professional services to protect our most crucial business systems, as identified in our Business Impact Analysis (BIA). The entire configuration was done from a single ‘pane of glass’, Prism Central. Xi Leap gave us a variety of recovery plans and protection policies to meet the Recovery Point Objectives (RPOs) and Recovery Time Objectives (RTOs) established by executive leadership, along with a choice of service levels that included full-network and partial-network failover recovery plans.
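
To make the RPO idea concrete, the sketch below checks the age of the newest replicated recovery point against a target; the timestamps and the one-hour objective are made-up values, and no vendor API is involved.

    # Made-up RPO check (no vendor API involved): confirm the newest replicated
    # recovery point is younger than the agreed Recovery Point Objective.
    from datetime import datetime, timedelta, timezone

    RPO = timedelta(hours=1)  # hypothetical objective agreed with leadership

    recovery_points = [  # replication timestamps reported by the DR platform (synthetic)
        datetime(2021, 3, 15, 22, 5, tzinfo=timezone.utc),
        datetime(2021, 3, 15, 23, 5, tzinfo=timezone.utc),
    ]

    def rpo_met(points: list[datetime], now: datetime) -> bool:
        return bool(points) and (now - max(points)) <= RPO

    now = datetime(2021, 3, 15, 23, 45, tzinfo=timezone.utc)
    print("RPO met" if rpo_met(recovery_points, now) else "RPO violated, investigate replication")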

Network Fabric Technology

Readers who have struggled with complex network architecture challenges will be thrilled to read what network fabric technology allowed us to accomplish with our infrastructure. 

Legacy Network

The health system had a traditional and complex network design which lacked dynamic routing, relied upon spanning tree for loop detection, and required manual provisioning. Network switches were statically interconnected with single fiber paths, virtual networks were provisioned with a rudimentary loop prevention mechanism, and the network design was flat and riddled with single points of failure.

Extreme Fabric

We partnered with Extreme professional services to implement Extreme’s Network Fabric to transform our legacy network infrastructure into a private network cloud. Extreme Fabric technology offered us extremely fast convergence times by allowing all links to be active and simplifying the extension of services across multiple sites. Fully redundant and active LAN and WAN fiber paths transformed our infrastructure, and the flexibility of the fabric allowed us to meet our challenging disaster recovery requirements.

Disaster Recovery Testing  

As many IT professionals know, Disaster Recovery testing events are often horror stories. Millions of dollars may have been spent to create a DR infrastructure that is only tested infrequently and that otherwise sits idle. Lengthy and complex DR runbooks are dusted off and staff members attempt to meet the RTOs and RPOs that their managers have committed to the business. The resulting complexity turns testing events into labor-intensive, exhausting, and thankless ordeals. The flexibility and ease of using Nutanix’s Xi Cloud allowed us to rewrite this tale completely.

Armed with a highly capable infrastructure, we designed disaster recovery procedures for multiple systems simply and in little time. As one example, we created a DR recovery plan for a pneumatic tube (P-tube) system powered by several network-attached controllers and managed by a central server. P-tube systems are critical healthcare infrastructure that shuttle blood and tissue samples between nursing units and the clinical laboratory. We used a full network recovery plan for this system, which allowed us to recover the P-tube central server in Xi Cloud. After a few readiness checks, we recovered the central server in the Xi Cloud while maintaining its original IP address, and the P-tube system was operational within minutes without any configuration changes to the controllers. The Xi Leap failback feature then allowed us to restore services to the on-premise infrastructure while replicating back all system logs and changes.

Conclusion

After our DR journey, our organization has a network and disaster recovery infrastructure that is simpler to manage but significantly more capable. We are well on the road to meeting all the recovery objectives for our critical business systems. Our executive team has greater confidence in our ability to sustain operations in the event of a catastrophe.

Our journey to meet demanding resiliency and availability goals was at times perplexing and time-consuming, but as Audrey Hepburn once said, “Nothing is impossible, the word itself says ‘I’m possible’!”
