Category Archives: Tech Thought Leadership

New World, New Threats: Benchmarking the Cyberattack Landscape in 2020

By Rick McElroy, Cybersecurity Strategist, VMware Carbon Black

The global disruption created by COVID-19 has created a ripple effect across the world. As a result, enterprises are facing more cybersecurity pressure than ever before. With a surge in attack volumes and breaches, and increasingly sophisticated techniques, these are unprecedented times for the security landscape. As security teams transform to meet these new challenges, the 2020 VMware Carbon Black Global Threat Report highlights the new threats of our new world.

Amid the global upheaval, security professionals faced new threats and an escalation in attack frequency. With insights from 3,021 CTOs, CIOs and CISOs, the VMware Carbon Black Global Threat Report highlights the impact of COVID-19 and the vulnerabilities it has exposed. The results reinforced much of what we are hearing anecdotally: the threat landscape is getting tougher, third-party vendors are proving a major liability, and COVID-19 has considerably intensified security threats.

 

Threat landscape escalates and UK bears the full force

We often talk about what keeps security professionals awake at night, but if you’re a security professional in the UK, you are not likely to be getting much sleep at all. The UK is bearing the brunt of escalating threats, with almost all survey respondents saying attacks had grown in volume and a similar percentage saying they were more sophisticated.

Of course, the acid test of the intensity of the threat environment is the number of times attacks succeed. The report found all but two of the 251 UK cybersecurity professionals had suffered at least one breach in the last 12 months. To put this in context, we’ve run this research four times in the UK, and these are the highest figures we’ve ever seen for volumes, sophistication and breach frequency. Proof, if it were still needed, that reliance on network security and perimeter-based defences is not enough; in the case of breaches it’s no longer a matter of if but when.

 

Extended enterprise under threat

Once we accept the inevitability of breaches, we can pivot more effectively to hardening defences against the vectors most likely to cause them. Here the research raised two key areas for focus, each requiring a different plan of action.

First is OS vulnerabilities, an area where poor patching hygiene is unacceptable in today’s environment, yet OS vulnerabilities still led to breaches for 15.5 percent of UK respondents. Firms need to focus on getting on top of patching as a strategic pillar of cyber defence. The key is improving communication between IT operations and SecOps professionals to build an integrated, cross-disciplinary approach.

The second key area of concern is the large partner ecosystems, supply chains and third-party applications that are central to business operations. The UK research showed that island-hopping, in particular, is having a disproportionately large impact, featuring in only six percent of attacks but causing 15 percent of breaches. Add to this the number of breaches caused by third-party applications and supply chain vulnerabilities and you’re looking at more than one-third of all breaches originating in third parties.

What this confirms is that visibility into the corners of the extended ecosystem is essential; if you can’t see it, you can’t fix it. The threats are there, so hunting them out before they lead to breaches is the only way forward. Behavioural analysis of all those interconnected and exposed endpoints will pick up anomalies and show defenders where to look for incidents where attackers are using third parties to gain access to networks and data.
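
To make the idea of behavioural analysis a little more concrete, here is a minimal, generic sketch in Python – not any vendor’s detection engine – that baselines how often each endpoint process makes outbound connections and flags observations that deviate sharply from that baseline. The hostnames, process names, telemetry values and threshold are all invented for illustration.

# Minimal behavioural-baseline sketch: flag endpoint processes whose outbound
# connection counts deviate sharply from their historical average.
# All telemetry values and thresholds below are illustrative assumptions.
from statistics import mean, pstdev

# Hypothetical history: outbound connections per hour, per process, per endpoint.
baseline = {
    ("host-01", "outlook.exe"): [12, 9, 14, 11, 10, 13],
    ("host-01", "updater.exe"): [2, 1, 0, 2, 1, 1],
}

# Latest observation window from the same (hypothetical) telemetry feed.
latest = {
    ("host-01", "outlook.exe"): 15,
    ("host-01", "updater.exe"): 740,   # e.g. a third-party updater suddenly beaconing
}

def anomalies(baseline, latest, z_threshold=3.0):
    """Return observations more than z_threshold standard deviations above baseline."""
    flagged = []
    for key, history in baseline.items():
        mu, sigma = mean(history), pstdev(history)
        observed = latest.get(key)
        if observed is None:
            continue
        if sigma == 0:
            score = float("inf") if observed > mu else 0.0   # guard against flat history
        else:
            score = (observed - mu) / sigma
        if score > z_threshold:
            flagged.append((key, observed, round(mu, 1)))
    return flagged

if __name__ == "__main__":
    for (host, proc), observed, usual in anomalies(baseline, latest):
        print(f"{host}: {proc} made {observed} outbound connections (usually ~{usual})")

Real endpoint detection tooling applies the same principle across many more signals and at far greater scale, but the underlying idea is the same: baseline normal behaviour, then hunt the deviations.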

 

COVID-19 surge exposes vulnerabilities

Into this intensive, complex threat environment came COVID-19. The UK lockdown went into effect on 26 March, prompting an overnight transition to home-working for UK office-based businesses and leading to unprecedented pressure for IT operations and security teams tackling productivity, security and business continuity. Confirming the hypothesis that disruption and malicious activity go hand-in-hand, 98 percent of our survey respondents in the UK reported an increase in cyberattacks as a result of more employees working from home, with malware at the top of the list. Increased IoT exposure and phishing attacks were also added to the list of woes.

All this exposed weaknesses in disaster recovery planning, in areas ranging from problems communicating with external parties to managing IT operations. However, the single biggest threat to emerge in the security arena following the spread of COVID-19 has been the inability to institute multifactor authentication, with well over one-quarter of UK respondents saying this has proved a major problem when trying to deliver secure remote access for employees.

 

Building Back Better

Today, perimeter-based defences are ineffective, threats are rising, especially those originating in third parties, and COVID-19 has added to the challenges of overburdened IT operations and security teams. The rapid adaptations that security teams need to make to protect a much more distributed, cloud-based workforce require an approach that makes security intrinsic and enables IT operations and security teams to integrate both strategically and tactically.

As the immediate impact of COVID-19 wanes and the next normal begins to emerge, this is a critical point at which companies must revise their approach to respond to the new threat landscape and to the flaws exposed by the stresses of the shift to remote working.

It’s time to break down the silos that exist in cybersecurity technologies and approaches and implement an approach that builds security intrinsically across applications, clouds, and devices. This will bring together IT operations and security teams to tackle new threats, eliminate blind spots, deliver better visibility and proactively address vulnerabilities before they become breaches or attacks.

COVID-19 has proved a watershed moment in many ways, prompting reflection and a determination to “build back better”. Collaboration will be fundamental to addressing threats, both old and new, in the new world in which we find ourselves.

Are you in control of your partner certification and rebate programs?

Written by Peter Olive, CEO, Vortex 6

As companies move to a new way of working caused by the pandemic, many of the partners that we work with are now reviewing how they structure their organizations to meet customers’ needs in the changed world.  Having worked remotely for five months, many are deliberating what this means, in terms of the shape of their business going forward, and how it will be affected by changing customer needs.

It’s not just about the physical office space and whether it is required in the same way as it was before government stay-at-home orders. Those firms that were forced to quickly digitize all aspects of their business will also have gained new experience that will help them identify where they need to change their practices and where they carry unnecessary risk.

All organizations will be working in a climate where there is no certainty or clarity about the effect the struggling economy will have on their business.

We will all be focussing on making our organizations as resilient as we can and seeing how we can achieve it.

What we have learnt through lockdown is that to do this we need simple-to-use, scalable and repeatable processes.  We need to automate, so that anyone can pick up and run with a process should employees be off sick, self-isolating, shielding or furloughed. As we emerge from the slow-down, many organisations may also be looking at a reduction in overheads; another reason to drive this change. In effect, this means that other staff in the business should be able to action tasks even if they only have limited knowledge and training.

Why is this so important?

Business continuity will be the watchword now and going into 2021.  Whether there is a second spike or not, pockets of business disruption and change will be the ‘new normal’ and I believe that failure to automate key parts of your operation could prove to be a significant issue for your business.

Take our Cisco Partners, for example. Due to Covid-19, Partners were given a six-month extension or a waiver period on their expiration dates for active certifications.  This waiver expires on September 16th, and there will undoubtedly be partners within the Cisco channel who won’t be compliant because they have outstanding requirements that need to be actioned, yet they don’t have the visibility to see it coming and establish what they need to do.

Today, most vendors’ rebate programs are based on partners being compliant with their partner program and if they get it wrong,  their rebate revenue could be at risk.  A good example is Cisco’s new VIP 36 program, which came into effect on July 27th.  There are changes in the products attracting rebates between VIP 35 and VIP 36. If you don’t understand the effect on your margin and make the right changes, the result can be a significantly negative impact on your rebates. Here at Vortex 6 our V6 Fusion solution automates the management of Partner Program compliance and certification and optimizes the profitability you can derive from Cisco VIP rebates.  Consequently, we recognize how important it is to not only leverage the new VIP 36 program but also, once the waiver period comes to an end, to ensure you have all the right certifications in place.

Partners failing to manage their program compliance can incur incremental costs including training and management distraction. The greater impact can come from loss of partner status, rebates, specializations, reduced discounts, being placed on get well plans plus any effect it may have on clients as engineers are taken off projects to pass exams.

Right now, it is more important than ever to maintain profitability and that means making sure that you remain compliant with vendor partner programs, otherwise you could be leaving money on the table.  Automating these processes is a sure-fire way of ensuring that you have the visibility to know exactly where you are at any point in time. It makes your business more resilient and less vulnerable to the long-term effects of Covid-19 disruption.

Digital Transformation – Why haven’t all businesses adopted this approach?

Written by Gerry Tombs, CEO, Clearvision

The Global Enterprise Mobility Market size is predicted to reach USD 2804.44 billion by 2030. Fuelling this growth is customer and employee demand for a seamless mobile digital experience in all aspects of their home and working lives. For businesses, it’s a case of adapting or being left in the dust, as they face challenges from disruptive industry entrants and innovators. In the last few months, COVID-19 has simply accelerated those digital transformation plans.

Many more traditional businesses, forced into lockdown, hurriedly put in stopgaps or workarounds so that they could enable remote working. But as we come out of lockdown and start to return to a new normal, the real work begins as many look to complete their digital transformation journey. So what are the factors that have prevented companies from going fully digital until now?

Resistance to change

Digital transformation means just what it says: a radical rethinking of how an organisation uses technology, people and processes to fundamentally change business performance[1]. That’s a daunting prospect for many executive teams in large corporations, despite the promised rewards. While workers at the coalface of the business may be crying out for streamlined mobile business processes and apps that will make them more efficient, the drive for large scale strategic change must come from the top. One of the founders at Intel, Andy Grove, once said: “Bad companies are destroyed by a crisis, good companies survive a crisis but great companies are defined by a crisis.”

This is also supported by the Harvard Business School report ‘Roaring Out of Recession’[2], which highlights that a progressive approach is the most rewarding. This is where you balance offensive and defensive moves. In our case, we cut costs by improving operational performance and not by reducing our workforce. Our offensive moves were also comprehensive; we developed new business opportunities at a time when our competitors were closing their doors, by making significantly greater investments in innovation and partnerships.

It is also essential that the leaders of the business develop the right mindset in a crisis. As Henry Ford once said: “whether you think you can or you can’t, you’re right.” In other words, if you think your business will fail in a recession then it probably will but thinking positively opens up opportunities. It is amazing what a positive mindset can do. If you spend your time listening intently to customers, ideas and opportunities will come, but this is far from a simple process.

Once you spot the transformation the business needs to make, human and financial resources need to be allocated and the whole business lined up in support of the process so that digital transformation is viewed as a strategic investment in the future competitiveness of the company, rather than an expense.

Lack of resources

Fear can also arise from concerns that already overstretched IT departments will struggle to cope with the new demands of application development and rollout. In fact, Gartner fed this particular fear[3] when it predicted that by the end of 2017 the demand for enterprise-grade mobile apps would have grown at least five times faster than the ability of internal IT departments to deliver them. However, in the five years since that prediction was made, the rise in low code development platforms has reduced the burden on IT departments and shortened the time to launch for new apps. Low-code platforms deliver faster time-to-market and as a consequence much faster ROI.

Therefore, this particular fear can now be put to bed. The COVID-19 pandemic has proven just how quickly organisations can move to adopt a virtual environment when the need demands it.

Getting users onboard

Large companies with employees that are used to a slower pace of life can find it hard to adapt to the speed of digital transformation. They can struggle to align vital employee education programmes with the rollout timeframe that can be achieved. It’s no good having a fantastically efficient new system if users are still hankering after the legacy technology – warts and all – and struggling to embrace their new environment. As previously stated, the driver and enthusiasm for change must come from the top.

Therefore, user education is a critical part of the transformation process. We often find that getting people to shift their thinking is one of the greatest hurdles we must overcome. In fact, often our developer community tells us: “even as they are building new applications, users are saying we should try to recreate them just as they were in the old system.”

Organisations need to be able to bring users on the journey with them to discover the efficiency and accessibility of the new applications that can be developed. To be successful in digital transformation, businesses need to invest in the human factor as well as in the technological expertise to realise the full benefits and mitigate resistance. A positive outcome of the lockdown is that users may be a lot more willing to embrace digital and a new way of doing things. The great aspect about low code platforms is that they are very change-oriented, fitting perfectly with agile methodologies, so if an application has been designed and it doesn’t quite fit user requirements it can be very quickly adapted and changed.

Leveraging Legacy Systems

Unlocking information and freeing business processes from legacy IT systems can be one of the biggest stumbling blocks when it comes to digital transformation. However, it is important to recognise the role and importance of such systems and understand why they lack the agility that the digital world now requires. Establishing when legacy systems should be retired and when they should be integrated into existing business processes is a key challenge; evidence suggests that enterprises are mixing it up. A report by VDC Research[4] found that 53% of large organisations (organisations with >1,000 employees) said that the most common development projects they worked on involved building net-new applications from the ground up; however, 43% stated that they were modernising legacy applications. As detailed in the Harvard Business School report, taking a progressive approach during a recession yields the best results.

The efficient solution is to find a way to leverage existing systems without letting them crush the ambition and potential of the future. An advantage of low-code platforms is their ability to extend those core, robust systems with a better user experience, taking advantage of what is there already, and adding agility, innovation and time-to-value. In essence, low-code helps IT to stay aligned with the business, and to deliver at its pace.

As IDC neatly put it “Digital transformation starts with mobility. Organisations with untethered business processes and ubiquitously accessible IT resources will be better positioned to compete and thrive in the digital economy.”[5] Low-code platforms will certainly help IT clear their app development backlog, which is imperative right now – as not only does it create room for innovation, but it will also help to deliver a competitive advantage.

If organisations can address the above challenges, they’ll be well and truly on their way to delivering a highly successful digital transformation programme.

[1] https://www.cio.com/article/3211428/what-is-digital-transformation-a-necessary-disruption.html

[2] https://hbr.org/2010/03/roaring-out-of-recession

[3] https://www.gartner.com/smarterwithgartner/how-to-deliver-enterprise-mobile-apps-faster/

[4] https://www.vdcresearch.com/_documents/briefs/emob/17-Mobile-Development.pdf

[5] https://www.idc.com/getdoc.jsp?containerId=IDC_P420

Disaster Recovery vs. Cloud Backup. What is the difference?

Written by Sarah Doherty, Product Marketing Manager, iland

Data loss and security breaches are becoming increasingly common events in today’s world. It is not a matter of if, but when a disaster of some kind will happen.  All of an organisation’s information must be protected and readily available at all times in order for a business to survive. Considering this fact, the importance of backups cannot be overstated. However, while backing up vital data is an integral part of any business’s IT strategy, having backups is not the same as having a disaster recovery plan.  Differentiating backup from disaster recovery can help you develop effective strategies for avoiding the consequences of downtime and business disruptions.

 

Understanding the basics of backup and disaster recovery is critical for minimising the impact of unplanned downtime on your business. Across all industries, organisations recognise that downtime can quickly result in lost sales and revenue, interrupted service, possible supply chain disruptions and loss of reputation due to bad press about an outage. Unfortunately, natural disasters, human error, security breaches and ransomware attacks can all jeopardise the availability of IT resources. Any downtime can disrupt customer interactions, hamper employee productivity, destroy data and freeze business processes.

 

What is backup and disaster recovery?

There’s an important distinction between backup and disaster recovery. Backup is the process of making an extra copy (or multiple copies) of data. You back up data to protect it. You might need to restore backup data if you encounter an accidental deletion, database corruption, or a problem with a software upgrade.  It is important to have a backup solution in place. Backup protects your data in the case of theft (anything from a single stolen laptop to an office break-in), employee accidents (deletion of an important file), or a technical issue (a crashed hard drive). With this protection, you can access a copy of your data and restore it easily.
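
As a deliberately simple illustration of that “extra copy” idea, the sketch below takes a point-in-time copy of a directory into a timestamped folder and shows how it could be restored. The paths are placeholders and this stands in for the concept only – a real backup product (iland’s included) adds scheduling, retention, encryption and off-site copies on top.

# Minimal backup sketch: copy a source directory to a timestamped destination.
# Paths are placeholders; a real backup solution adds scheduling, retention,
# encryption and off-site copies on top of this basic idea.
import shutil
from datetime import datetime
from pathlib import Path

def make_backup(source: str, backup_root: str) -> Path:
    """Create one point-in-time copy of `source` under `backup_root`."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    destination = Path(backup_root) / f"backup-{stamp}"
    shutil.copytree(source, destination)
    return destination

def restore_backup(backup_dir: str, target: str) -> None:
    """Restore a previously taken copy, e.g. after an accidental deletion."""
    shutil.copytree(backup_dir, target, dirs_exist_ok=True)

if __name__ == "__main__":
    copy_path = make_backup("./important-data", "./backups")
    print(f"Backup written to {copy_path}")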

 

Disaster recovery, on the other hand, refers to the plan and processes for quickly reestablishing access to applications, data, and IT resources after an outage. That plan might involve switching over to a redundant set of servers and storage systems until your primary data center is functional again.  Don’t get caught up on the term “disaster” and believe it has to be a major incident. A disaster can be as simple as your entire network crashing so that your employees can no longer work for the day (or longer). With a disaster recovery plan, your employees can continue to work by using the mirrored system while your IT team works on fixing the problem with the original network.  Having an inadequate DR plan can negatively impact your organisation, leading to interrupted service, lost sales and revenue, high costs, potential supply chain disruptions and possible loss of reputation due to the bad press around an outage.

 

Some organisations mistake backup for disaster recovery. But as they may discover after a serious outage, simply having copies of data doesn’t mean you can keep your business running. To ensure business continuity, you need a robust, tested disaster recovery plan.

 

What are the key differences between Backup and Disaster Recovery?  

Comparing backup and disaster recovery reveals several distinct differences between the two:

 

  • Different Purposes. Backups work best when you need to gain access to a lost or damaged file or object, such as an e-mail or a PowerPoint presentation. Backups are often used for long-term data archival, or for purposes such as data retention. However, if you want your business to quickly restore its functions after some unforeseen event, you should opt for disaster recovery. With both the DR site and DR solution in place, you can simply perform failover to transfer workloads to the VM replicas at the DR location, and your business can continue to function as normal even if the production site is unavailable.
  • Distinct RTO and RPO. Setting up a Recovery Time Objective (RTO) and a Recovery Point Objective (RPO) is crucial for any business. Backups have longer RTOs and RPOs and thus are not suitable for business-critical data that you need quickly restored after a disaster. Disaster recovery, on the other hand, implies replicating your critical VMs with the aim of quickly performing failover if necessary, which means that DR can accommodate much shorter RTOs and RPOs. A simple worked check of both objectives is sketched after this list.
  • Resource Allocation. Backups are usually stored in a compressed state and do not require much attention or storage space. Disaster recovery, on the other hand, requires a separate site with fully operational IT infrastructure that should always be ready for possible failover at any time.
  • Comprehensive planning. The backup process is rarely complicated: an organisation simply needs to create and stick to its Recovery Point Objectives as well as its requirements for data retention. With disaster recovery, things immediately become more complicated. Besides the need for additional resources, a business needs to evaluate the importance of its business applications and prioritise the recovery order of the VMs running those applications.
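
As promised above, here is a simple worked check of those two objectives. The numbers are illustrative assumptions, not recommendations: it compares the effective RPO and RTO of a backup-only approach and a replication-based DR approach against example business targets.

# Worked RPO/RTO check (illustrative numbers only): compare the recovery
# characteristics of a backup-only approach and a DR replication approach
# against example business objectives.
from dataclasses import dataclass

@dataclass
class RecoveryOption:
    name: str
    copy_interval_hours: float   # worst-case data loss -> effective RPO
    recovery_time_hours: float   # time to bring service back -> effective RTO

def meets_objectives(option: RecoveryOption, rpo_hours: float, rto_hours: float) -> bool:
    return option.copy_interval_hours <= rpo_hours and option.recovery_time_hours <= rto_hours

if __name__ == "__main__":
    # Hypothetical objectives for a business-critical application.
    rpo_hours, rto_hours = 0.25, 1.0   # lose at most 15 minutes of data, back within 1 hour

    options = [
        RecoveryOption("Nightly backup, manual restore", copy_interval_hours=24, recovery_time_hours=8),
        RecoveryOption("Continuous VM replication with failover", copy_interval_hours=0.1, recovery_time_hours=0.5),
    ]
    for option in options:
        verdict = "meets" if meets_objectives(option, rpo_hours, rto_hours) else "misses"
        print(f"{option.name}: {verdict} RPO {rpo_hours}h / RTO {rto_hours}h")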

 

Your organisation cannot afford to neglect backup or disaster recovery. If it takes hours to retrieve lost data after an accidental deletion, your employees, customers or partners will not have access to vital data, preventing them from completing business-critical processes that rely on your technology. And if it takes days to get your business back online after a disaster, you may permanently lose customers and business revenue.  Given the amount of time and money you could lose in both cases, investments in backup and disaster recovery are completely justified.

Don’t wait for disaster to happen.  For most organisations, backup and disaster recovery strategies are absolutely critical to maintain the future of the business.  Organisations must address IT recovery by creating a comprehensive solution that encompasses people, process and technology.   iland Secure Cloud solutions can help you evaluate and update your strategies, which can help you control complexity and cost.  Backup and disaster recovery plans can help only if they are designed, deployed and tested long before they are needed.  iland offers cost effective, scalable and secure cloud backup and disaster recovery as a service for all of your business essential data.

 

Why Apps Must WOW Users

Written by Gerry Tombs, CEO, Clearvision

Mobile apps are an essential part of our daily lives – be it for news, weather, health, fitness, travel or directions, you name it. From the moment we wake up to the moment we go to bed, apps form part of our everyday routine and we rely heavily on them. But this really highlights the challenge that businesses face in developing compelling and engaging apps that users want to interact with. Fundamentally, for an app to succeed, it must act as an extension of the user, augmenting their lives, solving their problems in new ways, and delivering such a seamlessly brilliant experience that they turn to it repeatedly until it becomes valued as an essential part of their lives.

This kind of user engagement is commercial gold and is what companies are seeking as they compete to gain customer loyalty and streamline business processes. So why is experience so critical in today’s market and how can businesses deliver enterprise grade apps that have the experience users are looking for?

Millennials and Generation Z

By 2025, millennials will make up 75 percent of the active working population, with Gen Z hot on their heels[1]. These true digital natives have matured in an always-on, instant access mobile culture. Not for them queuing at the post office or the bank, or indeed, queuing anywhere. Whether in a role as consumers, or as employees, they demand that apps live up to their high expectations before they become long-term members of their digital application ecosystems.

They build that ecosystem based on companies that offer the service that fits in with their omnichannel lives, enabling them to multitask life-changing decisions such as getting a mortgage with the mundanity of grocery shopping. And if one brand can handle both, even better!

Customers and employees are looking for brilliant experiences and loyalty is quickly transferred to the app that can deliver “wow” moments. This makes creating slick, smart, and creative apps mission-critical to businesses that want to compete for digital mindshare and break into the trusted application ecosystem of the user.

An app that fails to meet the user’s expectations is just that: a failure. Organisations risk wasting valuable investment and development time if they can’t get into the experience mindset that creates apps that truly are an extension of the user.

Getting into the mindset of your users

So, how do you get into that mindset?

A really good example is something that Zurich UK did, using the technology of our low-code partner, Mendix. Zurich wanted to investigate new ways to promote life insurance adoption across the UK market, and recognised that people were put off buying life cover because they thought it was unaffordable and perhaps a luxury. To challenge that view, Zurich developed a selfie app. They came up with the novel idea of FaceQuote, a frictionless, first-to-market application that gives prospective life insurance customers a quote by soliciting a selfie, estimating the user’s age, and calculating a monthly premium based on this estimated age.

The FaceQuote App consists of only two pages, providing a simple, nearly instantaneous idea of what life insurance would cost a typical customer. The user’s selfie is sent by FaceQuote to an image processing intelligence that provides an educated guess of the user’s age with just one click. The estimated age then drives an estimate of a life insurance premium for the user.
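
For illustration only, a hypothetical sketch of that flow might look like the following. This is not Zurich’s or Mendix’s actual code: the age estimator is stubbed and the premium bands are invented.

# Hypothetical sketch of a FaceQuote-style flow (not Zurich's actual code):
# a selfie is passed to an age-estimation step, and the estimated age drives
# an indicative monthly premium. The estimator is stubbed and the premium
# bands are invented purely for illustration.
def estimate_age(selfie_bytes: bytes) -> int:
    """Stand-in for the image-processing intelligence that guesses age from a selfie."""
    return 34  # a real implementation would call a computer-vision/ML service here

def indicative_premium(age: int, cover_amount: float = 100_000.0) -> float:
    """Very rough illustrative pricing: a made-up rate per £1,000 of cover by age band."""
    rate_per_thousand = 0.10 if age < 30 else 0.15 if age < 45 else 0.30
    return round(cover_amount / 1000 * rate_per_thousand, 2)

if __name__ == "__main__":
    age = estimate_age(b"")  # placeholder for the uploaded selfie bytes
    print(f"Estimated age {age}: indicative premium £{indicative_premium(age)} per month")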

I give this example because it is the very definition of a brilliant digital experience, in this case made possible by low-code – it’s an app that works perfectly for the end user and helps the organisation meet an important business goal.

Apps that work anytime, anywhere, any environment

Although we develop apps in solid network environments with full connectivity, the real world just isn’t like that. An app that is truly appreciated by a user will get spun up in any situation, and it needs to deliver value no matter how bad the connection or even when there’s no connection at all. Here at Clearvision we understand that if we can’t provide users with a fully functional app at any time or at least a clever and fun way to interact with the app while offline or when experiencing network limitations, we aren’t delivering well. And that’s why I think that more than ever, user experience experts and mobile developers must be aware of how important offline synchronisation and network management are in achieving the best technical solution without compromising a great user experience.

Put simply, the experience needs to deliver, even when the network doesn’t. That’s why testing in all kinds of less-than-ideal scenarios and prioritising mobile design and functionality at every stage are key. It’s the only way the app will win its way into the user ecosystem.
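
As a minimal illustration of the offline-synchronisation pattern described above (a generic sketch, not Clearvision’s implementation), the example below queues user actions locally while the network is unavailable and flushes them once a connectivity check succeeds. The connectivity endpoint and the upload step are placeholders.

# Generic offline-sync sketch: accept user actions locally, then flush the
# queue to the server once a basic connectivity check passes. The host used
# for the check and the "upload" step are placeholders.
import socket
from collections import deque

pending_actions = deque()   # local queue that survives while the app is offline

def online(host: str = "example.com", port: int = 443, timeout: float = 1.0) -> bool:
    """Crude connectivity check: can we open a TCP connection to a known host?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def record_action(action: dict) -> None:
    """Always accept the user's action locally, whatever the network is doing."""
    pending_actions.append(action)

def sync() -> int:
    """Flush queued actions to the server once connectivity returns."""
    sent = 0
    while pending_actions and online():
        action = pending_actions.popleft()
        # Placeholder for the real upload call, e.g. an HTTPS POST to the app's API.
        print(f"Synced: {action}")
        sent += 1
    return sent

if __name__ == "__main__":
    record_action({"type": "update_profile", "field": "phone", "value": "01234 567890"})
    print(f"{sync()} action(s) synced, {len(pending_actions)} still queued")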

Where thinkers become makers

When we’re talking about app development, the usual tale is of speed: rushing headlong to get apps out the door under pressure from business units or competitors. However, when trying to deliver a brilliant experience, it is well worth regularly taking the time to step back and check that you’re still heading in the right direction. That’s the beauty of low code – it actually frees up time to make sure that the app is delivering on criteria such as usability, creativity, and those “wow” moments. Because technical debt is lower with low code, there’s no fear of change; on the contrary, continuous change is positive and prolongs the lifespan of that app. We keep adding value to the app instead of retiring it, so those previous investments are never lost.

In today’s world, user experience is a key battlefield for businesses trying to win hearts and minds. Earning a place in the user’s app ecosystem is the ultimate prize, so the pressure is on to deliver experiences so brilliant that they unquestionably enhance the user’s life. Understanding user stories and how they can be changed, delivering a seamless mobile experience and taking time to create an intuitive, innovative product are all cornerstones of delivering brilliant digital experiences that permit the app to become “part of us”.

[1] https://www.accenture.com/nl-en/blogs/insights/the-secret-to-boosting-employee-engagement-for-millennials-and-gen-z

What does the enforcement of CCPA mean for businesses?

Written by Martin Sugden, CEO, Boldon James, a HelpSystems company

Last month saw the California Consumer Privacy Act (CCPA) enter the enforcement phase on 1st July, despite lobbying from some business groups to delay it, with many stating that owing to the impact of COVID-19, they wouldn’t be able to dedicate the manpower, resources and time to CCPA in order to prepare for it.

The implementation means that California’s Attorney General (AG) will be able to take direct action against businesses that violate the privacy protection requirements of the CCPA. The law has been in effect since 1st January 2020, but until now enforcement was limited to civil actions brought by consumers against violators.

Over the last few months, the AG’s office has been busy finalising how to assess penalties, how to define a breach and how to justify the size of a fine levied for violating the CCPA. Already, the extent to which businesses are concerned about meeting these new regulations is evidenced by the calls to delay the start of enforcement. However, California’s Attorney General Xavier Becerra was unmoved on the timing, stating that enforcement of the regulation would commence as planned and saying: “We encourage businesses to be particularly mindful of data security in this time of emergency.”

For those less familiar, the CCPA is a state-wide data privacy law that regulates how businesses all over the world can handle the personal information (PI) of California residents.  It is the US (Californian) counterpart of the European General Data Protection Regulation (GDPR), which came into force in May 2018.  However, one difference between the GDPR and the CCPA is that the CCPA’s definition of personal information is broader, covering data that is not specific to an individual but is categorised as household data, whereas the GDPR remains exclusively individual.

Not long ago, organisations operated closed systems, with most data processing taking place in their own environment and the ability to communicate directly with the outside world limited to email and telephone. The data protection laws in place then were benign, with only repeat or very serious offenders receiving a fine. The data protection landscape and its associated compliance environment changed fundamentally with the implementation of the GDPR, with many other privacy regulations following suit around the globe. California is the first US state to address the issue; however, Singapore, India and many other large economies have already published GDPR equivalents, each with their own local flavour.

Now that CCPA is in force, it will be interesting to see what size of fines and types of action will be issued. It was about a year after the launch of GDPR that the first fines were issued by the ICO, and they left no one in any doubt that this regulation has teeth. Record financial penalties for organisations such as Google, Facebook, Marriott and British Airways were a salutary lesson to businesses across the board that they cannot afford to fail against these regulations. Increasing public awareness of privacy rights means the damage is not just financial, but reputational too, a factor that is infinitely more difficult to measure, but can be catastrophic and long-lasting.

The tone of the various regulatory bodies’ communications around COVID-19 indicates that businesses cannot afford to take their eye off the data protection ball, even during these challenging times, and California, having gone ahead of the other states, is clearly serious about data protection.

When it comes to privacy, most countries have aligned to the standard of GDPR with some appropriate domestic legislation incorporated, such as I’ve indicated above with regard to CCPA. Therefore, I would say that if organisations work to incorporate GDPR requirements – including the mandate to ensure data protection by design and default – into their compliance regime, they won’t go far wrong.

So how do you comply and get some value for your organisation? While compliance with data protection regulations is non-negotiable and the penalties for failure are severe, it is a mistake to see compliance solely as an inevitable burden. With an intelligent and proactive approach, organisations can pivot from viewing compliance only as an expense and turn it into a positive competitive differentiator and one that, over the long term, will deliver efficiencies and cost reductions.

With this in mind, what steps should organisations take to sensibly adopt a better data protection posture and with it, build a firm foundation towards onward compliance? This is where data classification is a robust and critical first step in any compliance and data protection strategy.   Data classification is defined as a tool for the categorisation of data to enable organisations to effectively answer questions around what data types are available and where and how certain data is located, shared, and used.  Here at Boldon James we have been helping organisations for over 35 years put in place the right data classification and secure messaging, to meet their compliance objectives.  Therefore, as CCPA is now in force, I thought it would be helpful to share a few pointers to home in on when looking at data classification and your compliance strategy:

  • IT security and operations do not own business data – so do not look to the CISO for all the answers; their job is to help you, not to do your job.
  • Identify and engage stakeholders right across the business, including risk, legal, and compliance. This is critical to the success of your compliance programme.
  • Data stewardship will correctly align to regulations only when the data owners are identified and engaged.
  • Organisations must educate users about the sensitivity of data and ensure the appropriate controls are in place around confidential and sensitive information.
  • Alert users in real time when their actions may involve risk – for example, warning them before they send messages containing sensitive information outside the organisation. Allowing an automated gateway to put such messages in a queue slows the business down and helps no one.
  • The first step is the need to classify or label data with visual labels to highlight any specific handling requirements.
  • Then, secondly, ensure metadata labels can be read by other security tools to enforce security controls and stop unauthorised distribution of data (a tiny sketch of this idea appears after this list).
  • Link data classification tools to solutions such as DLP, encryption, access control and rights management to enhance overall data protection.
  • Make sure you provide critical audit information on classification events to enable remediation activity and determine your compliance position to the regulatory authorities.
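
As referenced above, here is a tiny illustration (not Boldon James’s product) of how a classification label carried as metadata can drive a downstream control: a simple policy check reads the label before a file is shared externally and warns the user rather than silently queueing the message. The labels, recipients and policy are invented for the example.

# Tiny data-classification illustration: a document carries a classification
# label as metadata, and a simple policy check reads that label before the
# file is shared externally. Labels, recipients and the policy are invented.
CLASSIFICATIONS = ["Public", "Internal", "Confidential"]

document = {
    "name": "q3-board-pack.docx",
    "metadata": {"classification": "Confidential"},   # the label other tools can read
}

def allowed_to_send(doc: dict, recipient_domain: str, company_domain: str = "example.com") -> bool:
    """Block external sharing of anything labelled above 'Internal'."""
    label = doc["metadata"].get("classification", "Confidential")  # fail closed if unlabelled
    if recipient_domain == company_domain:
        return True
    return CLASSIFICATIONS.index(label) <= CLASSIFICATIONS.index("Internal")

if __name__ == "__main__":
    for recipient in ["colleague@example.com", "contact@partner.org"]:
        domain = recipient.split("@")[1]
        verdict = "send" if allowed_to_send(document, domain) else "warn user before sending"
        print(f"{document['name']} -> {recipient}: {verdict}")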

It will be interesting to see how CCPA is adopted and how draconian the first few fines are.  Hopefully, the pointers I’ve outlined above will set you on the right path and keep your business out of the headlines.  If you are interested in finding out more about how data classification can help, why not download our whitepaper Classification By Design: The Foundation of Effective Data Protection Compliance.

Defeat Emotet Malware with SSL Interception – No Mask Needed

Written by Adrian Taylor, Regional VP of Sales for A10 Networks  

The Emotet trojan recently turned from a major cybersecurity threat to a laughing stock when its payloads were replaced by harmless animated GIFs. Taking advantage of a weakness in the way Emotet malware components were stored, white-hat hackers donned their vigilante masks and sabotaged the operations of the recently revived cyberthreat. While highly effective as well as somewhat humorous, the incident should not distract attention from two unavoidable truths. 

First, while the prank deactivated about a quarter of all Emotet malware payload downloads, the botnet remains a very real, ongoing threat and a prime vector for attacks such as ransomware. And second, relying on one-off operations by whimsical vigilantes is hardly a sustainable security strategy. To keep the remaining active Emotet botnets—and countless other cyberthreats—out of their environment, organisations need to rely on more robust and reliable measures based on SSL interception (SSL inspection) and SSL decryption.

History of Emotet and the threat it presents 

First identified in 2014, version one of Emotet was designed to steal bank account details by intercepting internet traffic. A short time after, a new version of the software was detected. This version, dubbed Emotet version two, came packaged with several modules, including a money transfer system, malspam module, and a banking module that targeted German and Austrian banks. Last year, we saw reports of a botnet-driven spam campaign targeting German, Polish, Italian, and English victims with craftily worded subject lines like “Payment Remittance Advice” and “Overdue Invoice.” Opening the infected Microsoft Word document initiates a macro, which in turn downloads Emotet from compromised WordPress sites. 

After a relatively quiet start to 2020, the Emotet trojan resurfaced suddenly with a surge of activity in mid-July. This time around, the botnet’s reign of terror took an unexpected turn when the payloads its operators had stored on poorly secured WordPress sites were replaced with a series of popular GIFs. Instead of being alerted to a successful cyberattack, the respective targets received nothing more alarming than an image of Blink-182, James Franco, or Hackerman.

Whilst this is all in good fun, the question remains: what if the white hats had left their masks in the drawer instead of taking on the Emotet trojan? And what about the countless other malware attacks that continue unimpeded, delivering their payloads as intended? 

A view into the encryption blind spot with SSL interception (SSL inspection) 

Malware attacks such as Emotet often take advantage of a fundamental flaw in internet security. To protect data, most companies routinely rely on SSL encryption or TLS encryption. This practice is highly effective for preventing spoofing, man-in-the-middle attacks, and other common exploits from compromising data security and privacy. Unfortunately, it also creates an ideal hiding place for hackers. To security devices inspecting inbound communications for threats, encrypted traffic appears as gibberish—including malware. In fact, more than half of the malware attacks seen today are using some form of encryption. As a result, the SSL encryption blind spot ends up being a major hole in the organisation’s defence strategy. 

The most obvious way to address this problem would be to decrypt traffic as it arrives to enable SSL inspection before passing it along to its destination within the organisation—an approach known as SSL interception. But here too, problems arise. For one thing, some types of data are not allowed to be decrypted, such as the records of medical patients governed by privacy standards like HIPAA, making across-the-board SSL decryption unsuitable. And for any kind of traffic, SSL decryption can greatly degrade the performance of security devices while increasing network latency, bottlenecks, cost, and complexity. Multiply these impacts by the number of components in the typical enterprise security stack—DLP, antivirus, firewall, IPS, and IDS—and the problem becomes clear. 
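
To make the mechanics of SSL interception more tangible, here is a minimal Python sketch of the idea: terminate TLS at an inspection point, look at the decrypted bytes, then forward them to the internal service. The certificate paths, addresses and “suspicious marker” list are placeholders, and a production SSL inspection solution also handles re-encryption, certificate management, policy-based bypass for regulated traffic, and performance at scale.

# Minimal SSL interception sketch: terminate TLS at a proxy, hand the
# decrypted bytes to an inspection hook, then forward them to the internal
# service over a plain socket. Everything below is illustrative only.
import socket
import ssl

LISTEN_ADDR = ("0.0.0.0", 8443)                   # where encrypted client traffic arrives
BACKEND_ADDR = ("10.0.0.20", 8080)                # internal server receiving inspected traffic
CERT_FILE, KEY_FILE = "proxy.crt", "proxy.key"    # placeholder certificate material

SUSPICIOUS_MARKERS = (b"MZ", b"powershell -enc")  # toy indicators for illustration

def inspect(payload: bytes) -> bool:
    """Return True if the decrypted payload looks safe to forward."""
    return not any(marker in payload for marker in SUSPICIOUS_MARKERS)

def serve_once() -> None:
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile=CERT_FILE, keyfile=KEY_FILE)

    with socket.create_server(LISTEN_ADDR) as listener:
        client, _ = listener.accept()
        with context.wrap_socket(client, server_side=True) as tls_client:
            payload = tls_client.recv(65536)      # decrypted by the TLS layer
            if not inspect(payload):
                print("Blocked: payload matched a suspicious marker")
                return
            with socket.create_connection(BACKEND_ADDR) as backend:
                backend.sendall(payload)          # forward plaintext for illustration
                tls_client.sendall(backend.recv(65536))

if __name__ == "__main__":
    serve_once()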

How efficient SSL inspection saves the day 

While many organisations rely on distributed, per-hop SSL decryption, a single, dedicated SSL inspection solution can provide the best course of action by decrypting traffic across all TCP ports and advanced protocols like SSH, STARTTLS, XMPP, SMTP and POP3. Such a solution also helps provide network traffic visibility to all security devices, including inline, out-of-band and ICAP-enabled devices.

Whilst we should celebrate the work of the white hats who restrained Emotet, it is not every day that a lethal cyberthreat becomes a matter of humour. Having had a good laugh at the attackers’ expense, we should turn our attention to making sure that attacks like Emotet have no way to succeed in the future—without the need to count on vigilante justice. This is where SSL inspection can really save the day.

How AI technology is transforming talent acquisition

Written by Nicolas Speeckaert, skeeled’s Co-Founder and Managing Partner

When the coronavirus pandemic hit in March, recruitment was halted in many companies. However, with UK employees being urged back to the office from 1st August, we are also seeing hiring intentions starting to change.

The latest figures from the Recruitment and Employment Confederation’s (REC) JobsOutlook survey suggest that employers are becoming more confident about hiring new staff and that their ability to make investment decisions has improved for the first time since February[i].

The REC found that organisations planning to hire permanent staff within the next three months increased from a net +6 in early June to +14, while demand for staff in the next four to 12 months remained at net +12. However, with many people still working at home some or all of the time, there are likely to be major changes to how companies attract talent in the future.

 

Embracing technology

The pandemic has highlighted to CEOs how technology can support new ways of working. During lockdown, remote workers were increasingly using tools such as Zoom and Microsoft Teams for meetings and to engage and connect with their colleagues.

Recruitment also adapted and moved into the virtual world. Recruitment outsourcing company Cielo published a report in June entitled, ‘The Future of Work Survey[ii],’ which found that two thirds of businesses had successfully interviewed and onboarded new starters during the Covid-19 outbreak.

The report highlighted that most employers are now comfortable using technology for talent acquisition, with 82% of hiring managers saying they will continue to interview using video once the pandemic is over. Two fifths (41%) are happy to onboard staff virtually and 32% are not concerned about making job offers without meeting candidates face to face.

66% said their reliance on advanced technology will remain, so for CEOs now keen to expand their teams, ‘virtual’ talent acquisition is likely to become the new normal.

Technology can be used to manage the entire recruitment process and help companies to screen, select and appoint staff.  It can streamline and automate processes, including helping companies manage the high volumes of job applications that are likely in the coming months as unemployment rises due to the impact of the coronavirus on companies.

Managing a flood of job applicants could be a real headache for CEOs. Sifting through high volumes of CVs and applications and selecting the right talent can be arduous, time consuming and costly.

However, by embracing digital recruitment solutions, employers can automate many of the repetitive and time-consuming processes involved in talent acquisition, particularly in the early stages of recruitment in attracting, sifting, and selecting candidates for interviews.

New AI recruitment technology

In recent months, we have seen demand for virtual talent acquisition solutions accelerate.  To address this demand, we recently launched our predictive AI talent management software in the UK to support employers with their talent acquisition. The key benefit of skeeled is that it is powered by innovative predictive AI technology, which automates candidate screening to ensure applicants match the job brief.

The software’s pre-screening tool, which is powered by machine learning and Natural Language Processing (NLP), accurately sifts through CVs and selects candidates that are the best match and fit for the job description. The system produces an automated binary decision, either rejecting applicants who are clearly not a fit or putting them forward for further analysis. This technology saves companies hours of time and money screening candidates and ensures a completely fair and objective selection process.
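
To illustrate the general principle of automated CV screening – and this is a generic sketch, not skeeled’s model or methodology – the example below scores CVs against a job description using TF-IDF text similarity and applies a simple threshold to produce a binary decision. The texts and threshold are invented, and it assumes scikit-learn is installed.

# Illustrative sketch only: a generic way to show how text similarity can
# support an automated screen of CVs against a job description. Thresholds
# and texts are made up for the example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "Python developer with experience in REST APIs, SQL and cloud deployment."
cvs = {
    "candidate_a": "Five years building Python REST APIs, PostgreSQL and AWS deployments.",
    "candidate_b": "Retail manager experienced in rota planning and stock control.",
}

REVIEW_THRESHOLD = 0.2   # arbitrary cut-off for this toy example

vectoriser = TfidfVectorizer(stop_words="english")
matrix = vectoriser.fit_transform([job_description, *cvs.values()])
scores = cosine_similarity(matrix[0:1], matrix[1:]).flatten()

for (name, _), score in zip(cvs.items(), scores):
    decision = "forward for further analysis" if score >= REVIEW_THRESHOLD else "reject"
    print(f"{name}: similarity {score:.2f} -> {decision}")

In practice, and as the article notes, such automated decisions should remain easy for recruiters to monitor and overturn.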

Some CEOs may have concerns that using technology in this way will remove the human touch from the hiring process, but this is not the case. While the screening decisions are automated, these can be easily monitored and revised, and employers or recruitment teams can requalify applicants they might consider a fit.

During the pandemic, employees have become accustomed to regular video calls. skeeled also incorporates video in its pre-screening processes, and employers can invite candidates to take part in a pre-recorded video interview to answer some initial questions.

These video interviews give employers a far more holistic view of the applicants so they can check if individuals are a good fit before progressing to a face to face interview, which also saves both parties a great deal of time. For businesses operating globally these videos also enable companies to share candidate information with colleagues and teams in different countries or regions, so they can really collaborate and ensure that collective, objective, and fair hiring decisions are made. Candidates can also undertake assessments that test their motivation and skills using the system.

An Applicant Tracking System is also included which enables interviewers, recruiters, and CEOs to collaborate and communicate throughout the hiring process and follow up effectively with applicants in a timely manner which helps to ensure a good candidate experience.

As companies start to navigate the post-Covid-19 world, selecting and acquiring the best talent is going to be key to their recovery. Busy CEOs are increasingly going to make use of innovative recruitment solutions to transform talent acquisition, saving them time and money.

 


[i] https://www.personneltoday.com/hr/employers-feeling-more-confident-about-ability-to-hire/

[ii] https://www.recruitment-international.co.uk/blog/2020/06/59-percent-expect-tech-and-remote-working-to-lead-to-more-effective-recruitment

The Key Benefits for High Availability Load Balancing

Anthony Webb, Vice President Sales EMEA, A10 Networks, considers why load balancing is a growing and important market, as businesses increasingly need to ensure high availability for mission-critical applications as network demand grows

The load balancer market is expected to grow to £4.7 billion by 2023, and trends such as mobile broadband, multi-cloud and hybrid cloud, virtualisation, remote working, and bring your own device (BYOD) have helped to fuel this growth. The result is that tremendous pressure is being placed on IT departments to ensure high availability for mission-critical applications such as ERP, communication and collaboration systems, and virtual desktop infrastructure (VDI).

The need for high availability

High availability, which is the ability of a system or system component to be continuously operational for a desirably long period of time, can help IT departments implement an architecture that uses redundancy and fault tolerance to enable continuous operation and fast disaster recovery. This is true for every element of the data centre—from high availability for applications to high availability for the load balancer or application delivery controller (ADC) that manages network traffic within and across the data centres in an environment.

High availability begins with identifying and eliminating single points of failure in the infrastructure that might trigger a service interruption—for example, by deploying redundant components to provide fault tolerance in the event that one of the devices fails. Load balancing, whether provided through a standalone device or as a feature of an ADC, facilitates this process by performing health checks on servers, detecting potential failures, and redirecting traffic as needed to ensure uninterrupted service.
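
A stripped-down sketch of that health-check-and-redirect behaviour is shown below: a round-robin picker that skips any backend failing a basic TCP connect check. The backend addresses are placeholders, and a real load balancer or ADC layers on persistent monitoring, weighting, session persistence, SSL offload and much more.

# Minimal health-check-and-redirect sketch: a round-robin picker that skips
# any backend failing a simple TCP health check. Addresses are placeholders.
import itertools
import socket

BACKENDS = [("10.0.0.11", 80), ("10.0.0.12", 80), ("10.0.0.13", 80)]  # placeholder servers
_rotation = itertools.cycle(BACKENDS)

def is_healthy(address, timeout=0.5) -> bool:
    """Treat a successful TCP connect as a passing health check."""
    try:
        with socket.create_connection(address, timeout=timeout):
            return True
    except OSError:
        return False

def pick_backend():
    """Return the next healthy backend in rotation, or None if all are down."""
    for _ in range(len(BACKENDS)):
        candidate = next(_rotation)
        if is_healthy(candidate):
            return candidate
    return None

if __name__ == "__main__":
    backend = pick_backend()
    print(f"Routing new connection to {backend}" if backend else "No healthy backends available")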

While ensuring fault tolerance for servers is obviously critical, a high availability architecture must also consider the load balancing layer itself. If this becomes unable to perform its function effectively, the servers below run the risk of overflow, potentially compromising their own health as well as application performance and application availability. This makes redundancy just as important for the load balancer or ADC as for any other component in the data centre.

As with a high availability server cluster, there are several ways in which load balancers or ADCs can be deployed to provide high availability, including:

  • Active-standby – The most common configuration, the active-standby model includes a fully redundant instance of each ADC which is brought online only in the event that its primary node fails. Each active ADC can be configured differently, though each active-standby pair will share the same configuration.
  • Active-active – In this model, multiple similarly configured ADCs are deployed for routine use. In the event that one node fails, its traffic is taken over by one or more of the remaining nodes and load balanced as needed to ensure consistent service. This approach assumes that there will be sufficient capacity available across the cluster for it to function even when one ADC is unavailable.
  • N+1 – Providing redundancy at a lower cost than active-standby, an N+1 configuration includes one or more extra ADCs that can be brought online in the event that any of the primary ADCs fails.

In each case, rapid failover enables fault tolerance and disaster recovery for the load balancing function so that application performance and application availability are not affected by the failure. Failover and traffic management are typically handled through a version of the Virtual Router Redundancy Protocol (VRRP) standard.

Key high availability features for load balancing or ADC

In addition to ensuring high availability for your ADC, you should also make sure that your ADC provides high availability for the applications whose traffic it manages. In the event that a server fails, the ADC can reroute traffic to another available server in the cluster. Key features that enable this function include:

  • Load balancing methods – There are several methods that can be used for server selection, including round robin, least connections, weighted round robin, weighted least connections, fastest response, and more. Your ADC should offer all these options to allow the most suitable configuration for your environment and priorities. A small sketch of two of these methods follows this list.
  • Health monitoring – To ensure rapid failover with little or no downtime, server health should be continuously assessed based on a number of indicators, including:
    • Time series of total bytes in and out from each server
    • Time series of traffic rates (in Mbps) in and out from each server
    • Percent of error traffic over range
    • Number of good SSL connections
    • Average application server latency by service
    • Client-side latency SRTT, max, min, and average as a time series
    • Custom health checks such as measuring the response time for specific SQL database queries
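
As mentioned under load balancing methods above, here is a small sketch of two of those selection strategies – weighted round robin and least connections. The server names, weights and connection counts are invented for illustration.

# Two common server-selection methods, sketched with invented servers:
# weighted round robin and least connections.
import itertools

servers = {"web-1": {"weight": 3, "active": 12},
           "web-2": {"weight": 1, "active": 4},
           "web-3": {"weight": 2, "active": 9}}

def weighted_round_robin(servers):
    """Yield server names in proportion to their configured weights."""
    expanded = [name for name, cfg in servers.items() for _ in range(cfg["weight"])]
    return itertools.cycle(expanded)

def least_connections(servers):
    """Pick the server currently holding the fewest active connections."""
    return min(servers, key=lambda name: servers[name]["active"])

if __name__ == "__main__":
    picker = weighted_round_robin(servers)
    print("Weighted round robin order:", [next(picker) for _ in range(6)])
    print("Least connections choice:", least_connections(servers))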

Why is this so critical?

As enterprises become further dependent on the Internet to get business done, the threat of downtime can become a competitive disadvantage. With downtime estimated to cause losses of around £780,000 per week for a company with roughly 10,000 employees, the direct losses are substantial and a primary reason why businesses need to establish a high availability solution. Apart from the direct cost of downtime, we also see business continuity, in terms of reputation and data loss, as another factor encouraging businesses to ensure high availability is implemented. Firstly, reputation will improve as the business and brand become known for reliability versus competitors. Secondly, reducing the risk of data loss is essential due to the severe penalties incurred under the terms of the GDPR. A highly available infrastructure also mitigates the negative impact of outages on revenue and productivity.

How Can SMEs Monitor Performance, Manage Costs, and Secure Profit?

Written By Sascha Giese, Head Geek, SolarWinds

Around the world, organisations are adapting—rapidly—to a cascade of factors designed to keep businesses running and keep employees safe. Supporting newly remote workforces is raising the stakes for IT organisations to keep applications up, networks humming, and environments performing optimally—all while ensuring the security of an expanded attack surface.

At the same time, organisations are re-evaluating budgets to weather an economic downturn. Of course, IT pros have always been under budget pressure, and doing more with less is the name of their game. But the challenges created by the COVID-19 pandemic have amplified those factors, increasing the need for products that aren’t just easy to use, but easy to buy and can fit the needs of any budget.

Amid these macroeconomic headwinds, small- to medium-sized businesses are among the hardest hit. A recent study estimated the coronavirus could be costing UK SMBs as much as £69 billion, putting two-fifths at risk of permanently closing. In our increasingly virtual world, technology will be pivotal for SMBs to fuel business success and maintain organisational stability.

The “new normal” means businesses need visibility across on-premises, private cloud, public cloud, and multi-cloud environments—and user monitoring extending to home offices, couches, and kitchen tables. It complicates management needs across physical networks, infrastructure, and packaged apps sitting behind the firewall, and adds software-defined networks, internet traffic, dynamic infrastructure, edge computing, and custom apps, not to mention aligning people and processes with the technology to keep it all humming.

Small businesses can find themselves falling into one of two “too” camps: too little, too late; or too much, too soon.

 

Too Little, Too Late

As data breaches become more frequent and privacy concerns soar, security has already been elevated on the business agenda. The SolarWinds® IT Trends Report 2020: The Universal Language of IT revealed two-fifths of tech pros in small businesses (43%) state that 10 – 24% of their daily responsibilities/tasks involve IT security management. And when asked which area of security skills and management their organisation was prioritising for development (prior to Covid-19), six in 10 tech pros within small businesses named network security.

The truth is, you can’t ignore any part of your IT operations management world. Whether it’s your applications, servers, data, infrastructure, or networks across your hybrid or multi-cloud environment—it all matters to your business’ survival… and to its success.

However, the same study found a quarter of tech pros within small businesses state they don’t use any technical approaches to gain visibility into adopted cloud or SaaS technologies (e.g., network traffic analysis/network applications analysis, log analytics, tracing, etc.). Forty-one percent of those surveyed weren’t even sure which of their existing tools (if any) provided them with visibility of cloud and SaaS technologies. This is even more concerning when you consider 49% of small and 58% of medium business IT pros are spending more time managing apps and service delivery than infrastructure and hardware.

But it’s easy to become complacent because 99% of the time, things will run smoothly. And even if they don’t, the consequences are rarely perilous. The real dangers only become apparent when it’s too late—when a threat goes undetected and infects an entire organisation.

 

Too Much, Too Soon

But let’s not forget the other 75% who are using technical approaches to monitor adopted cloud or SaaS technologies. The challenge for this group is deploying their time and resources effectively. Even before COVID-19, 55% of tech pros within small businesses and 46% within mid-sized businesses named lack of budget and resources as their single biggest barrier to successfully supporting their organisation.

Too often, years of putting out IT fires with point solutions makes the looming inferno hard to see through all the smoke. And when you have to expand skillsets without training to match, you need tools to be easier to use—not harder. Although there’s now a proliferation of tools available to support IT with operations management, many of these create more complexity than they cure. Integrations remain difficult, functionality is limited, and the user experience is far from intuitive.

Small businesses in particular, which have to deliver the same value with fewer resources, can fall into the trap of thinking a greater number of tools or a wide-ranging set of functionalities (designed for large enterprises) will arm them with additional firepower. The reality is, it’s too much, and often too soon. More dashboards don’t mean better visibility—they’re an extra drain on precious bandwidth that distracts from core business priorities. When you can see more in one connected view, you and your teams can do more with less, cover your bases, and deliver even more for the business.

 

Securing Profit

Securing data and proving compliance wasn’t always your job—but it is now. And unless you have a specialised security team, it’s up to you to keep your systems compliant and the data driving your business secure—while doing your day job. So, how can tech pros create a remote monitoring strategy that ensures optimum performance and maximum efficiency?

It starts with an audit. An audit of potential threats. An audit of existing tools. And an audit of current practices. This will need to factor in the new IT environment post-COVID, whereby the security perimeter of an organisation lies far beyond the confines of the office. Consider the applications your teams require, the devices they may be working from and the sensitivity of company data to define access levels. This will need to be constantly reviewed and updated, so tech pros will need to approach it systematically to ensure it remains accurate and fit for purpose.
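
One systematic way to record such an audit – sketched below with invented entries and an assumed 90-day review cycle – is a simple register that captures each application’s devices, data sensitivity and access level, and flags anything overdue for review.

# Illustrative audit register: each application gets a data-sensitivity rating,
# an access level and a review date, and anything overdue for review is flagged.
# Entries and the 90-day review cycle are assumptions for the example.
from datetime import date, timedelta

REVIEW_CYCLE = timedelta(days=90)

audit_register = [
    {"application": "CRM (SaaS)", "devices": "managed laptops", "sensitivity": "high",
     "access": "sales team only, MFA required", "last_reviewed": date(2020, 3, 1)},
    {"application": "Team wiki", "devices": "any", "sensitivity": "low",
     "access": "all staff", "last_reviewed": date(2020, 7, 15)},
]

def overdue(register, today=None):
    today = today or date.today()
    return [entry for entry in register if today - entry["last_reviewed"] > REVIEW_CYCLE]

if __name__ == "__main__":
    for entry in overdue(audit_register, today=date(2020, 8, 20)):
        print(f"Review due: {entry['application']} (last reviewed {entry['last_reviewed']})")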

IT should work backwards from business needs to define the tools it requires. Assess each subscription and licence to understand the value your organisation derives from it rather than the functionality it provides. This is a chance to consolidate, saving you and your organisation time and money—and an opportunity to demonstrate a direct impact on profitability back to the wider business at a time when it needs it the most.

 

Final Thoughts

Working in IT, your role has always been business critical. Keeping applications, infrastructures, and networks up and running, performing at optimal speed—and secure and compliant—is the engine running your digital organisation. But the need to drive growth—while keeping a newly remote workforce productive and available—has intensified the complexity of today’s IT operations management and financial challenges in a way few IT pros and business leaders, if any, have ever seen.

IT pros within small businesses are feeling the impact of this new environment most acutely. They need to be able to protect and defend their data and systems—and keep compliant—just the same as their enterprise counterparts, but with a fraction of the resources. It’s a tough balance to strike—businesses run the risk of either not having enough protection in place or having an abundance of tools, none of which may provide the level of insight required. With the initial chaos subsiding and many businesses returning to some semblance of “normal” as lockdown restrictions lift, this is a good time for tech pros to take stock and rethink the systems that have delivered the most value.