Category Archives: Tech Thought Leadership

James Costanzo: How secure is your data pipeline?

Written by James Costanzo, Content Strategist at iland

As you may have heard, the latest high-profile case of cybercrime hit the headlines recently — and the results have not been pretty.

On Friday, May 7, Colonial Pipeline Co., which operates the 5,500-mile network of pipes responsible for roughly 45 percent of the gasoline and diesel fuel consumed on the East Coast, was forced to close following a ransomware attack. It took five days for the company to begin restarting operations, and even then, fully restoring the flow of fuel will not be immediate.

The impact has been felt nationwide, with frenzied runs on fuel resulting in long lines and shortages up and down the East Coast, surging gas prices, and volatility across the energy market. It even prompted an emergency response from the Biden administration, which addressed the growing threat of ransomware by name.

While the specifics of the attack, and the true extent of its damage, are still being sorted out, we can say this for certain: it succeeded in putting a worldwide spotlight on cybercrime, exposing to everyone, not just those in IT, just how vulnerable we are to threats like ransomware.

(Cyber)crime Wave

If it feels like cybercrime is becoming more and more prevalent, that’s because it is. Beyond highly publicized examples, like the attacks on Colonial Pipeline and SolarWinds, a simple Google search reveals just how widespread the problem has become.

In part because of the COVID-19 pandemic and the sudden increase in remote work, 2020 was a cybercrime record breaker. According to Forbes, the past year brought more attacks on companies, governments, and individuals, and more data lost in breaches, than ever before. The numbers paint a bleak picture.

Malware increased by 358 percent in 2020. Not to be outdone, ransomware kept pace as the fastest-growing type of cybercrime. One in five Americans has been the victim of ransomware, with one new victim added to their ranks every 10 seconds. Making matters worse, the average cost of a data breach rose to $3.86 million. Meanwhile, 80 percent of senior IT and IT security leaders believe their organisations lack sufficient protection against cyberattacks. All told, cybercrime is expected to cost the world approximately $10.5 trillion annually by 2025.

Shall we continue?

In the wake of the Colonial Pipeline attack, a top Biden administration cybersecurity official warned against the now-obvious — that cyberattacks were “growing more sophisticated, frequent, and aggressive.” We’ve been saying that very thing at iland for quite some time. The good news is, we also know how to help.


Securing Your Data Pipeline

Given the circumstances, it’s a tad ironic that fuel has so often served as an analogy for data: both flow through pipelines. Today, we use a vast network of digital pipelines for our data, but many companies do so without the proper protections in place. The increase in cybercrime frequency, sophistication, and impact means security needs to be top of mind for all your workloads and data.

David Grout: 5 Tips to Improve Threat Report Analysis and Action

Written by David Grout, CTO EMEA for FireEye, and Yann Le Borgne, Technical Director for ThreatQuotient.

Most organisations have more threat intelligence than they know what to do with, from a variety of sources – commercial, open source, government, industry sharing groups and security vendors. Bombarded by millions of threat data points every day, it can seem impossible to appreciate or realise the full value of third-party data.

Here are five tips the two experts shared.

  1. Select the right sources of threat data for your organisation.

When polled, the audience reported using a well-balanced combination of threat intelligence sources. They are on the right track, but David explains that it is also important to identify the right sources for your organisation and to collect threat reports from several different sources, as they provide different levels of content – strategic, operational and tactical. Figure out the who, what and when of consumption, and use that as your metric for success when looking at acquisition.

Yann adds that as open-source intelligence (OSINT) is free and easy to access, most organisations use it extensively. But organisations must also consider the trust and reliability of sources. Yann explains that in a classical hierarchy, the highest level of trust comes from the intelligence you generate and receive from your close network and peers, while OSINT information sits at the lowest level. David recommends using trust models such as the Admiralty System or NATO System, which classify information from A to F for reliability and from 1 to 6 for credibility, particularly for new sources that surface during times of crisis or outbreak. Applying this scale to threat intel helps to determine what to do with the data and reduces the false positives and noise generated by non-validated and unconfirmed data.
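
To make the grading concrete, here is a minimal Python sketch (our illustration, not something from the discussion) of how an Admiralty-style score might be used to triage incoming threat data; the threshold and feed entries are invented:

```python
# Admiralty-style triage sketch: source reliability runs A-F, information
# credibility runs 1-6; earlier letters/numbers mean higher trust.
RELIABILITY = "ABCDEF"   # A = completely reliable ... F = cannot be judged
CREDIBILITY = "123456"   # 1 = confirmed ... 6 = cannot be judged

def admiralty_score(reliability: str, credibility: str) -> int:
    """Combine the two grades into one sortable score (0 = most trusted)."""
    return RELIABILITY.index(reliability) + CREDIBILITY.index(credibility)

def triage(items, max_score=4):
    """Keep only items trusted enough to act on; the threshold is arbitrary."""
    return [i for i in items
            if admiralty_score(i["reliability"], i["credibility"]) <= max_score]

feed = [
    {"indicator": "198.51.100.7", "reliability": "B", "credibility": "2"},  # close peer
    {"indicator": "203.0.113.9",  "reliability": "F", "credibility": "6"},  # unvetted OSINT
]
print(triage(feed))  # only the peer-sourced indicator survives
```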

  2. Determine who will acquire the data.

In response to the next poll question, 25% of respondents said all groups have access to all threat intelligence sources. David explained that while it may be good to provide access to a broad audience, it is probably even better to have one team responsible for acquiring and analysing threat reports and only delivering information that is actionable. Not every stakeholder needs every level of intelligence.

Using the report on the Ryuk ransomware from the French National Agency for the Security of Information Systems (ANSSI) as an example, Yann explained that to do this you need to determine how the same report will impact and be used by various teams across the organisation. Different teams may use different aspects of the same report in different ways to achieve their desired outcomes, for example modifying policy (strategic), launching hunting campaigns (operational) or disseminating technical indicators (tactical). A threat report that is in PDF format requires a lot of work to translate the information it contains into actionable data for different sets of users, which is why it is important to have a dedicated team acquire the data.

  3. Structure the data for analysis.

Yann explained that the three steps for analysis are: understanding the context of the report, assessing its relevance, and relating it to any prior reports, intelligence and incidents. This process allows you to contextualise and prioritise intelligence but requires that the data be structured uniformly. Threat data comes in various formats (e.g., STIX, MITRE ATT&CK techniques, news articles, blogs, tweets, security industry reports, indicators of compromise (IoCs) from threat feeds, GitHub repositories, YARA rules and Snort signatures) and needs to be normalised. The information you gather, in the Ryuk report for example, is expressed in the authors’ own vocabulary, and translating it into a machine-readable format is necessary to link it to other related reports and sources of information.

David adds that it isn’t just about format. The volume of information across the threat intel landscape is high and different groups use different names to refer to the same thing. Normalisation compensates for this and enables you to aggregate and organise information quickly. Structuring data so that you can prioritise is critical for triage and ensures you are focusing on the threats that matter most.
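
As a toy illustration of that normalisation step, the sketch below maps vendor-specific threat actor names onto one canonical name before aggregating indicators. The alias table is illustrative (UNC1878, Wizard Spider and Grim Spider are names different vendors have used in connection with the Ryuk-linked group), and real platforms maintain far larger mappings:

```python
# Map vendor-specific actor names to a canonical name, then aggregate IoCs.
ALIASES = {
    "unc1878": "WIZARD SPIDER",      # Mandiant/FireEye naming
    "grim spider": "WIZARD SPIDER",  # earlier CrowdStrike naming
    "wizard spider": "WIZARD SPIDER",
}

def normalise(report: dict) -> dict:
    """Return a copy of the report keyed to the canonical actor name."""
    actor = report["actor"].strip().lower()
    return {**report, "actor": ALIASES.get(actor, report["actor"].upper())}

reports = [
    {"actor": "UNC1878", "ioc": "bad.example.com"},
    {"actor": "Wizard Spider", "ioc": "198.51.100.7"},
]

merged = {}
for r in map(normalise, reports):
    merged.setdefault(r["actor"], []).append(r["ioc"])
print(merged)  # both reports now aggregate under a single actor name
```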

  4. Use tools to help with analysis.

Yann explains that the tools you use need to support your desired outcome. According to the poll, 67% of attendees use technical ingestion (SIEM), which indicates that desired outcomes are more technical. And 15% are still handling the acquisition and analysis process manually. This is quite a challenge, particularly during a big event. A threat intelligence platform (TIP) does a good job of extracting context and can help you use the information in various ways for different use cases (e.g., alert triage, threat hunting, spear phishing, incident response) and to support different outcomes.

It is also important that the tool you select works well with frameworks like MITRE ATT&CK. David shared that MITRE is the most used framework to organise the analysis process. Customers are identifying their crown jewels and mapping to MITRE to understand which adversaries might target them, the tactics, techniques and procedures (TTPs) to concentrate on, and what actions to take.

  5. Select the right tools to help make data actionable.

Analysis enables prioritisation so you can determine the appropriate actions to take. There are a variety of tools to help make threat reports and other elements of your threat intelligence program actionable and achieve desired outcomes at the strategic level (executive reporting), operational level (changes in security posture) and tactical level (updating rules and signatures).

In the final polling question, 45% of respondents said they are using a TIP to make the data actionable for detection and protection, but few are using a TIP for forensics. Yann and David agree this is a missed opportunity and something teams should explore as their capabilities continue to mature. From a forensics standpoint, MITRE is an important tool for analysing past incidents so organisations can learn and improve.

In closing, our experts recommend that before you start thinking about threat intelligence sources, analysis and actions, you need to understand the desired outcomes and deliverables for each of your constituents. It’s a journey that typically starts at the tactical level and, with maturity, evolves to include operational and strategic intelligence to deliver additional value. When shared the right way with each part of the organisation, key stakeholders will see threat intelligence for the business enabler that it is, and the threat intelligence program will gain support and the budget to grow.


Adrian Taylor: Communications Service Providers—Don’t Let IPv4 Exhaustion Stop Your Growth

Written by Adrian Taylor, Regional VP at A10 Networks

As rural broadband initiatives help bridge the digital divide, communications service providers have a wealth of opportunities to add subscribers, expand territory, and grow their business. However, they will first need to address the challenges posed by IPv4 exhaustion—and its impact on the cost of new subscriber IP addresses.

Since November 2019, when the final allocation of publicly available IPv4 addresses was made, new IPv4 addresses have been obtainable only at high open market prices. There is a virtually unlimited stock of IPv6 addresses available, but migration to the new standard is a highly complex prospect and impractical in the short term for many communications service providers. They need a more feasible and affordable way to support new subscribers.

Fortunately, there’s another way forward. Carrier-grade NAT (CGNAT), a standard for network address translation (NAT), makes it possible to extend the life of existing IPv4 addresses to support additional subscribers. In this way, communications service providers can capture new opportunities for growth—while simultaneously positioning their business for IPv6 migration when the time is right.

Rural Broadband Initiatives Expand Opportunities for Communications Service Providers

While broadband plays a central role in people’s lives, millions of households in both rural and urban communities still lack access to high-speed internet, whether delivered via Fibre to the Premises (FTTP), fixed wireless internet, or a mobile ISP—representing a vast potential market for providers. Now, accelerating support for rural broadband initiatives and digital divide programs is turbocharging that opportunity.

Meanwhile, demand for broadband services is surging. As the COVID-19 pandemic shifted broad swathes of modern life online, average broadband network usage in the UK doubled in 2020 compared to 2019.

Rural broadband networks have performed well, thanks in part to infrastructure investments by rural broadband providers and an increase in FTTP penetration. This robust connectivity paves the way for new opportunities for both communications service providers and underserved communities and customers, facilitating the introduction of new services such as rich content experiences, new forms of collaboration, distance learning, telehealth, IoT, precision agriculture, and more.

One of the problems that communications service providers will need to address upfront is IPv4 exhaustion—a significant issue, but a solvable one.

Overcoming IPv4 Exhaustion

The cost of acquiring more IPv4 addresses to support new growth has escalated rapidly over the last few years, as the last remaining IPv4 addresses from Regional Internet Registries (RIRs) have been fully allocated. IPv6 migration is a complex and long-term prospect—and even if communications service providers chose to switch over their own infrastructure, they’d still need to be able to support IPv4 at the same time in order to carry IPv4 content and accommodate IPv4 devices.

In order to accommodate large waves of new customers connecting to broadband services, many communications service providers will need to find a way to extend the utility of their current IPv4 addresses.

Carrier-grade NAT (CGNAT), also known as large-scale NAT (LSN), offers a solution. In a standard NAT design, network address translation enables a single public IPv4 address to be shared across the devices on a private network. CGNAT adds an additional translation layer to NAT that allows service providers to share their own public IPv4 addresses across the private IPv4 networks of multiple subscribers, multiple devices of a single subscriber, or multiple businesses.

By using architecture models like NAT44 or NAT444, CGNAT can expand IP address pools by 40-60x or more. This helps communications service providers support new subscribers and drive growth without the need to purchase new IPv4 addresses on the open market, or to upgrade or enhance home modems, routers, or cellular phones.
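
To see where multipliers of that order come from, consider the port arithmetic behind NAT44. The figures in this sketch are illustrative assumptions, not A10 specifications: each connection consumes one source port on the shared public address, so the usable port range is carved into per-subscriber blocks.

```python
# Back-of-the-envelope NAT44 capacity: one public IPv4 address is shared by
# carving its source-port range into fixed-size per-subscriber blocks.
USABLE_PORTS = 65536 - 1024     # ports below 1024 are reserved
PORTS_PER_SUBSCRIBER = 1000     # assumed allowance for one household

subscribers_per_ip = USABLE_PORTS // PORTS_PER_SUBSCRIBER
print(subscribers_per_ip)       # 64 -> one address serves ~64 subscribers

public_pool = 1024              # e.g., a /22 of public IPv4 addresses
print(public_pool * subscribers_per_ip)  # ~65,000 subscribers served
```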

Building DDoS Protection into Growing Networks

As communications service providers leverage address translation technologies to grow their footprint and reach new rural broadband initiative and digital divide customers, they need to keep security top-of-mind; service provider networks are big targets for distributed denial of service (DDoS) attacks. Traditionally, a DDoS attack on a communications service provider’s infrastructure was somewhat isolated. If an individual subscriber was targeted, the attack was contained to their service. With a NAT gateway in place, however, hackers can target the gateway itself to take down the access of large swaths of subscribers. They can also target an individual subscriber and jump to the corresponding NAT gateway to propagate their attack to other subscribers.

A CGNAT solution can help communications service providers protect subscribers from DDoS attacks and ensure that the NAT gateway itself is not compromised. Mitigation techniques include IP anomaly protection to recognise and drop traffic matching common attack signatures; Internet Control Message Protocol (ICMP) rate limiting; protection against CPU overload caused by spoofing attacks; connection rate limiting; and automatic IP address blacklisting to mitigate attacks targeting NAT pool addresses.
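
As a simplified illustration of one of those techniques, the sketch below implements connection rate limiting with a token bucket. Real CGNAT appliances enforce this per subscriber or per NAT pool address, typically in hardware; the rates here are invented:

```python
import time

class TokenBucket:
    """Allow a steady rate of new connections with a bounded burst."""

    def __init__(self, rate: float, burst: int):
        self.rate, self.capacity = rate, burst
        self.tokens, self.last = float(burst), time.monotonic()

    def allow(self) -> bool:
        # Refill tokens for the time elapsed, capped at the burst size.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: drop, or blacklist a persistent source

limiter = TokenBucket(rate=100, burst=200)  # 100 new connections/s, burst of 200
print(all(limiter.allow() for _ in range(200)))  # True: the burst fits
print(limiter.allow())                           # False: the 201st is dropped
```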

Bridging the Transition to IPv6

While communications service providers address the immediate challenge of IPv4 exhaustion, they should also be making plans for an eventual transition to IPv6—an evolution that is already well underway among online content providers and large mobile network operators as they have migrated their networks to 4G and 5G. The interconnected nature of IPv6 adoption makes it a complex and long-term process.

To achieve full IPv6 adoption globally, each link in the chain must be running IPv6, from the end user, to the carrier, to the content provider. Realistically, not all three of these links will switch over at the same time. Subscribers will always want to connect to as many endpoints as possible, including at least a few IPv4-only websites. As a result, even companies with IPv6 implemented in their networks still need to communicate with legacy IPv4 servers and applications. On the other side of the equation, IPv4 customers need to be able to use services developed with IPv6.

A complete carrier-grade networking (CGN) solution should provide both CGNAT and IPv4-IPv6 migration techniques. By enabling connectivity between IPv4 and IPv6 devices, networks, and internet destinations, these solutions can help communications service providers extend the life of their current IPv4 investments while they evolve and manage the hybrid environment resulting from coexisting IPv4 and IPv6 infrastructure.
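
One widely used transition technique of this kind, though not named above, is NAT64/DNS64, where an IPv4 address is embedded in the well-known 64:ff9b::/96 prefix (RFC 6052) so that IPv6-only subscribers can still reach IPv4-only destinations. A minimal sketch using Python’s standard library:

```python
import ipaddress

# RFC 6052 well-known prefix used by NAT64/DNS64 deployments.
NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def embed_ipv4(v4: str) -> ipaddress.IPv6Address:
    """Map an IPv4 address into the NAT64 well-known prefix."""
    return ipaddress.IPv6Address(
        int(NAT64_PREFIX.network_address) + int(ipaddress.IPv4Address(v4)))

print(embed_ipv4("192.0.2.1"))  # 64:ff9b::c000:201
```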

As communications service providers seek to offer high-speed broadband while also dealing with IPv4 exhaustion and planning for IPv6 adoption, carrier-grade networking, including CGNAT and IPv4-IPv6 transition, is becoming an essential platform for long-term growth.

Gary Blower: The Future of Service Management in the DevOps Era

By Gary Blower, Solutions Architect, Clearvision

Whether you view your organisation as having an agile approach or not, in 2020, companies had no choice but to drastically change their way of working as the world rapidly pivoted to remote working. Organisations that had already embraced agile principles had the advantage of being able to adapt faster to the pandemic and meet the demands of their employees, who were suddenly all working from home. Now, as we start to slowly emerge from multiple lockdowns and restrictions, one interesting side effect of COVID-19 is that it has lowered our collective tolerance for slow, overly bureaucratic processes. We all crave an agile approach, whatever our definition of agile might be.

COVID-19 has accelerated digital transformation

Digital innovation has fundamentally changed how the world operates. COVID-19 demonstrated just how much we rely on technology. And, as modern technology permeates every area of our lives, our expectations around the availability of information and the speed with which we can obtain it are even higher than they were pre-pandemic. Therefore, as lockdowns ease, the world is continuing to change just as rapidly to keep pace with the demands on businesses, who must accelerate out of recession and aggressively compete to remain relevant.

The knock-on impact of this acceleration is that organisations need their IT teams working together as efficiently and effectively as possible. Likewise, their IT service management (ITSM) capabilities must be nimble and efficient to support shifting organisational priorities, capitalise on new opportunities, and satisfy growing end-user demands for immediate and seamless service, wherever users are located.

To meet this increasing demand and requirement for speed, the flow of work between the support, DevOps and operational teams must be unified, and teams need to be empowered to deliver work with agility. IT teams are under huge pressure and are required to become even more adaptable to the challenges they face. This means that practices and workflows need to remain flexible so that teams are better positioned should situations like those of the past 12 months arise again in the future.

Traditional service management approaches can’t keep pace with demand

However, even the smallest request for change is not an easy task for some organisations and must be approved by layers of bureaucracy, which can take weeks or sometimes months. Additionally, this increased demand, combined with the ongoing pressure to lower costs, runs counter to traditional approaches to service management that emphasise risk mitigation and control over efficiency and agility—leaving some IT teams hamstrung and unable to play to their full potential. In our ‘always on’, digital world, this will disadvantage those companies unable to respond, with end-users and customers no longer willing to accept long wait times. And why should they? The COVID-19 experience showed that, when we really need to, we can completely change our way of working overnight. Therefore, many customers are now unforgiving of those that cannot accommodate their requirements or promptly meet their expectations.

One way that organisations can accelerate their service management initiatives and introduce more efficient methods to serve ever-growing business demands is by implementing Jira Service Management. This is the only ITSM solution built on the Jira software development platform, which means users don’t have to integrate Jira separately and they benefit from having everything they need in one platform.

DevOps, IT support, and IT operations must all collaborate

This accessibility is important because IT teams using other service management tools often end up integrating their application with Jira for additional functionality, which can be clunky and not as streamlined. The co-existence of Jira Service Management and the Jira software development platform has huge benefits because it means that support and development teams can collaborate on the same platform and fix software issues and incidents faster. Jira Service Management was also designed with both IT and development teams in mind and provides streamlined requests and change management processes. This allows teams to make change requests without complex approvals and link incidents to problems in one click.

With other service management platforms, siloed tools between development and IT operations can result in context switching, lack of visibility, and decelerated work. As a result, integrations between Jira Software and service management tools tend to be weaker and cumbersome to manage. In contrast, tight integrations between Jira Software and Jira Service Management mean seamless and accelerated workflows between development and IT. Teams can link issues across Jira and ingest data from other software development tools, providing IT support and operations teams with richer contextual information to respond rapidly to requests, incidents, and changes.

Jira Service Management also offers customisable templates for ITSM, customer service, and business teams such as HR and finance. Furthermore, an intuitive portal in Jira Service Management makes it effortless for customers to ask for help, while the simple UI makes it easy for teams to use. And, with easily configured automations, IT teams can prioritise and resolve requests quickly.

Service management built for the DevOps era

In today’s world of digitised services and support, being able to deliver a rich and collaborative service desk, modern incident management, and change management is critically important. The world is changing fast and, to keep pace, organisations need a service management platform built for the DevOps era. An open, collaborative platform enables teams to scale operations quickly and ensure the organisation’s critical services are always on and operating at high velocity. This will ensure they can respond quickly to business change while delivering great customer and employee service experiences.

Women in Security: Meet Jan Lawford, Senior Director of EMEA Security Sales at VMware Security Business Unit

Written by Samantha Mayowa, Head of Global Communications at VMware Security Business Unit

We meet a real team player at VMware: Jan Lawford, Senior Director of EMEA Security Sales, who addresses why it is so important for women to overcome self-doubt, why passion drives success, and why a growth mindset is key to succeeding.

With international tenure at brands such as Dell EMC, RSA Security, and Avaya, Jan has extensive technical, sales, and channel experience. She also dedicates her time to helping drive diversity and inclusion internally, for the benefit of all employees and of business performance.

By shedding light on the women of the VMware Security Business Unit and their incredible successes, we hope to inspire other talented women to make their mark in the industry. Follow the VMware Women in Security series.

Tell us what excites you about your new role at VMware?
One of my big passions is helping individuals develop their careers and accelerate their success. As the new leader of the talented EMEA security sales team, I am excited about the opportunity to work with each member of the team to help them define how they can be even more successful, deliver more value to our customers and accelerate the success of the security business for VMware. At the end of 2021, I want the team to look back at the success they have achieved and how as individuals they have positively contributed to the overall performance of the team.

How did you land a career in security and what led you to the VMware Security Business Unit?
I didn’t start out in tech; in fact, my first job was in the toy industry. One of the managers I worked with left the business to join a tech company and asked me to join him. We are going back more than 25 years, and at the time I realised that the tech industry was a great place to be.

Prior to joining VMware, I was at Dell for seven years and responsible for a sales team focused on converged infrastructure solutions, which meant I was working closely with VMware. This gave me the opportunity to really gain a good understanding of the technology, culture, and values of the organisation, all of which were a big part of the attraction. The other compelling draw was the opportunity to step back into a role in cybersecurity – which has never been more important to customers than it is today.

Who is your role model in tech or security?
I wouldn’t say I have a particular role model per se; however, I have really benefited from more informal mentoring from various individuals over the years. At the time, I didn’t always identify or recognise it as being mentored, but looking back, there are several key individuals that really helped me develop my career.

I know this might sound a bit of a cliché, but my husband has been and continues to be a huge support and influence on me. We met in the IT industry many years ago and I always get very honest and constructive feedback from him, which I admit at times, I don’t always want to hear, but this has been hugely beneficial.

What excites you most about security and the future of security at VMware?
What really attracted me to security in the first place is that the industry is so fast-paced and dynamic, with innovation happening every day. Looking back, there has been a huge evolution in the industry since I first joined, and the technical capabilities accessible to businesses today are unrecognisable.

Security has never been as critical as it is today. The pivot to remote working has further accentuated the importance of a robust security strategy and capability, with our workplaces morphing into our living rooms, our home offices, and our spare rooms. Additionally, it’s not just about organisations losing money or brand reputation anymore; cyberattacks are causing real-world problems, even putting lives at risk, as in the case of the healthcare industry.

At VMware, we can not only deliver business value through technical innovation, but from a broader perspective we can have a real impact on the quality of people’s everyday lives and the challenges we face in the world today.

What advice do you have for women looking to get into the security industry?

To be successful in any industry women must first manage their own self-doubt. There is a lot of research and commentary around self-doubt being the real glass ceiling for women. Therefore, it is important for women to make sure that they are dealing with this. And when that internal voice questions whether we can rise to the challenge and whether we can succeed, we must address that nagging doubt, otherwise it can become a real limiting factor.

Throughout my own career I have always believed in being passionate about what you are doing. Passion drives success. So, try to choose a career path that you believe in and are incredibly passionate about.

It is also imperative to have a growth mindset, to always be open to learning. If you are progressing you will be facing challenges as you move to the next stage in your career, therefore it is important to invest time in developing the skills you may not possess and to believe in your ability to constantly develop and grow to meet the next challenge and the next opportunity.

Follow your instinct. I think women have natural skills and attributes that make them a great fit for security. Women can gauge risk very differently to men and are naturally good at identifying changing patterns of behaviour, which is a great skill for identifying threat actors.

Finally, how can we create an inclusive and supportive environment for women in the workplace?

A flexible working environment will always be important. Women often have significant demands on them outside of the workplace; therefore, where possible, employers should try to accommodate these demands and support a flexible work schedule. This will ensure that a work-life balance is maintained.

A transparent culture is also critical. Organisations need to have an honest and respectful dialogue internally to make sure that there is no negative bias, conscious or unconscious.

Finally, sponsorship opportunities are vital. It is important that, as women start to progress through their careers, they are given the opportunity and access to sponsorship to support them on their journey to success.


Seshika Fernando: Trends in open banking in 2021

Written by Seshika Fernando, General Manager, Open Banking, WSO2

2021 will be characterised by volatility and uncertainty as financial institutions navigate the economic and social impacts of COVID-19. The pandemic has delivered a systemic shock unlike anything seen in the modern era and this will reverberate across banking institutions and financial systems. Open Banking is becoming a strong driver within the financial world and it has the potential to enable communities and individuals to make informed financial choices as they recover from the pandemic’s impacts. From the evolution of open banking standards to how banks may approach open banking as a tool to achieve business goals, there are strong signs that the coming year will see open banking become a strategic enabler for financial institutions and the communities and individuals they serve. This drive will also prompt high demand for the connecting interfaces/APIs that power open banking and that will help to build an ecosystem that features a large range of diverse stakeholders from traditional banking and fintech to third party innovators. 

The following are the trends we expect to see emerging or consolidating in 2021. 


Trend 1: Financial institutions will shift from compliance-driven adoption of open banking to making it a key driver of business strategy

As progressive proponents of open banking begin to reap commercial benefits, more institutions will realise that there is more to open banking than compliance. Open banking can and should be a journey towards digital transformation, especially for banks looking to improve consumer experiences. This will usher in more focus on solving consumer problems and on integrating with the fintech ecosystem. It will help create higher-value offerings through partnerships, and cut costs, in a challenging business climate.


Trend 2: We’ll see further movement towards a common core for standards globally 

Open banking standards are largely evolving from a common denominator – the open banking standard in the UK – and there is a lot of commonality across jurisdictions. This is a big potential opportunity for fintech firms to develop cross-border services that serve a highly mobile client base unbounded by geographical restrictions. Regulators are already considering cross-border interoperability in looking at the future evolution of open banking as seen with the second Farrell Review on the Consumer Data Right in Australia and with New Zealand’s proposed Consumer Data Right. Similar moves are inevitable in mature open banking markets in the UK and the EU as they look to promote international trade and business and improve consumer experiences. 


Trend 3: COVID-19 drives need for greater financial literacy and flexibility provided by open banking 

The millions of people affected economically by COVID-19 need information and flexibility to help make better financial decisions on loans, investment and savings products that will give them better outcomes as they try to recover. The pandemic has truly heightened the need for an ecosystem approach that goes beyond single institutions and helps consumers access and evaluate alternatives across the whole spectrum of products and services.  

Greater visibility must be provided of the different options that people can access to help them get back on their feet. This could be sub-prime lending, alternative lending, switching to a better product or investing in a different way that suits their current position and desired future state. Open banking enables third parties to create applications that give users a fuller understanding of their position and the actions to take – something no single bank could deliver on its own – and this will be very valuable to consumers. It is part of a broader drive towards data-driven decision-making for consumers across markets – a move that goes from “open banking to open finance to open life”.


Trend 4: US financial regulators will start to “catch up” with the market where industry-driven standards have seen wide adoption 

The complexity of the US financial regulatory framework means we have not seen the same regulator-driven leadership to establish open banking standards as we have seen in other markets like the UK. In this vacuum, initiatives like the FDX API standard, developed by a diverse group of industry players, have filled the gap in terms of providing a common language for secure, permissioned data sharing, recording impressive traction with 12 million users already. But regulation is starting to catch up, with the Consumer Financial Protection Bureau (CFPB) announcing an advance notice of proposed rulemaking in October. This is the first step towards creating formal open banking regulation in the US. As has happened in other markets, we expect that the U.S. government will codify and build on the industry-defined standards that are working today.


Connecting the open banking ecosystem 

Underpinning all these trends will be demand for connections between ecosystem players: banks, fintech firms and other third parties. The way to deliver those connections is to build good interfaces, and those interfaces today are APIs. The APIs have to be great; otherwise, fintech firms will not consume them consistently enough to deliver business results for the bank.

The challenge is to create the right APIs and then to ensure they are reliable, discoverable, usable, high-performing and flexible enough to offer long-term value to both API consumers and their end-consumers. Banks will need the right technology, the right partners, and the right kind of commitment to implementing a robust digital strategy as they look to move past the often-difficult early stage and build strong adoption across the ecosystem.
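
As a sketch of what a usable, predictable interface can look like in practice, here is a hypothetical minimal endpoint built with Flask. The route, field names and data are invented for illustration and are not drawn from any open banking standard:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Toy in-memory data store standing in for a core banking system.
ACCOUNTS = {"acc-001": {"currency": "GBP", "balance": "1042.17"}}

@app.get("/open-banking/v1/accounts/<account_id>/balance")
def balance(account_id: str):
    account = ACCOUNTS.get(account_id)
    if account is None:
        # A predictable, documented error shape matters as much as the happy path.
        return jsonify(error="account not found"), 404
    return jsonify(accountId=account_id, **account)

if __name__ == "__main__":
    app.run()
```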


Jack Underwood: How engaging directly with your customers is central to developing a great app experience

Written by Jack Underwood, Founder and CEO of Circuit

In the world of tech, there are few things as frustrating as a clunky product. A problem may be widespread and your solution may be brilliant, but if you can’t nail the interface, it’ll be deleted as soon as it’s downloaded. A smooth user experience is therefore the holy grail of app development, but there seems to be confusion about how to achieve it. The answer is simple: skip to the end and get the intended user involved from the start.

A good customer experience is one of the key elements that sets a product apart. Whilst innovative tech is great, if your features are slow to load, the formatting’s off or your interface isn’t intuitive then user annoyance will quickly mount and override usefulness. We’re eager for new, digital ways to make our lives easier and more efficient – last December analytics firm App Annie reported consumers were on track to spend $112 billion across iOS and Google Play in 2020, representing a year-over-year increase of 25%. But if the software itself isn’t easy to use or efficient then it won’t have a place on a home screen. If brands don’t want to watch customers walk away then they need to be getting their product experience right.

A seamless user experience is important for any digital tool but for some it’s absolutely critical. In products that are designed to be used repetitively, or that must be in use for many hours in order to fulfil their function, any issues are instantly amplified by the sheer amount of time a user will spend with it. And these issues directly lead to wasted time: if a product is used 15 times a day but its user-unfriendliness means a task takes 5 minutes when it should have taken 3, that’s half an hour of time wasted every day. If your product is designed for constant use then you have nowhere to hide and its flaws are directly impacting your customers.
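
Worked through in code, that arithmetic looks like this:

```python
# 15 uses a day, each taking 5 minutes instead of 3.
uses_per_day, extra_minutes = 15, 5 - 3
wasted = uses_per_day * extra_minutes
print(wasted)             # 30 minutes lost per day
print(wasted * 365 / 60)  # ~182 hours lost per year
```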

The stakes may be high but the solution couldn’t be more simple. What’s needed to create a great product experience isn’t a shedload of money or a team of Nobel Prize winning scientists – it’s having an in-depth knowledge of the end user and understanding the ways they will interact with your product. The key to building a great product is examining the complexity of their needs, the secondary problems they face and their interaction with competitors so you can create the features and capabilities they really need, even if they don’t know these themselves. It’s a waste to wait until the product testing stage to get this input: trying to graft on new ideas at a later date is hugely inefficient, ineffectual in terms of integration, and you’ll be tempted to ignore these crucial insights because of the cost of incorporating them. You need to be engaging with your end user from the very start.

There are a multitude of ways product developers can do this. Whether it’s running review sessions, setting up specific focus groups, conducting detailed surveys or physically experiencing a day in the life of the end user, there are countless strategies for getting the insight you need. We’ve found that actually bringing product users onto our team and employing them part-time has been hugely effective in ensuring we’re truly building a product that serves their needs. There’s no one magic method for understanding your user; it’s a process of using a combination of tactics and working out which are most appropriate for you.

But even when your product is launched, the importance of user feedback doesn’t end. Product development is a continuous journey and in order to stay competitive, brands need to be alert to developments in the market, new opportunities and arising issues. Proactively seeking user feedback not only allows you to keep ahead of trends and refine the user experience still further, it establishes a unique relationship of trust by demonstrating you value a user’s individual opinion and have their best interests at heart. This leads to greater consumer loyalty and free word-of-mouth marketing, enabling you to attract a larger audience.

As tech brands seek to stand out in a crowded market, they need to wake up to the unparalleled resource they have within easy reach. Engaging early with end users allows businesses to eradicate tech frustration, create bespoke features that really work and build a community around a product.  It turns out the ending is a very good place to start.

Adrian Taylor: Top 5G Core (5GC) and Mobile Network Predictions for 2021

By Adrian Taylor, EMEA Vice President, A10 Networks 

 Contain your excitement; 5G is coming (again). However, wasn’t it actually launched over two years ago? 

For those not familiar with the nuances of 5G technology, 5GC (core or standalone) takes 5G deployment to the next level and replaces the 4G packet core with a new, cloud-native core using containers and following 3GPP specifications (release 15). This is somewhat separate from the market-by-market launch that most operators publicise, and the activity is less visible to the casual subscriber. 

Below, I have predicted some of the key 5GC deployment and adoption trends for 2021.

2021 Prediction – Over half of mobile operators will have launched 5GC (standalone) by the end of 2021 

Most mobile operators that have launched 5G have chosen what’s called a ‘non-standalone’ implementation. That is a hybrid of 4G and 5G that allows mobile operators to offer much of 5G’s capabilities to their subscribers while still leveraging existing investment in their 4G packet core. Operators are eager to take advantage of the benefits of 5GC (standalone): greater service agility and lower costs. Industry survey data reveal that operators are committed to 5GC (SA, or standalone) implementation, with 93% of mobile operators implementing within a three-year window and investing in multiple 5G security options.

2021 Prediction – Half a billion mobile subscribers globally will be using 5G by the end of 2021 

Mobile operators also see rapid adoption of 5G over the next three years by subscribers as 5G deployment accelerates. Most operators said that within five years, at least 25% of their traffic would be carried via 5G – with 40% of operators predicting that most of their traffic would be carried by 5G. This is consistent with the recent Ericsson Mobility Report that forecasts 56% of total mobile data traffic will be 5G by 2026. 

That’s a significant leap from today where almost half of operators report they have no traffic on 5G core at all. For 2021, 9% of operators say that most of their traffic will be on 5G with 70% predicting less than 50% will be 5G. 

2021 Prediction – Three-quarters of mobile operators will have reduced 3G traffic to 25% or less 

It’s really hard for mobile operators to get rid of old technology. 3G still exists in most mobile networks despite rapid 5G deployment. This is down to a combination of subscribers who won’t give up their older handsets, specific geographic areas, such as rural areas, that still run legacy equipment, and regulatory and industry practices that require a lengthy process for “sunsetting” older technologies. In North America, AT&T’s shutdown of 3G is expected in 2022; Verizon’s in 2021.

Today, only 13% of mobile operators surveyed have managed to eliminate support for 3G. By 2025, most operators (60%) said that they will no longer support 3G. That means that by 2025, 40% of operators will still carry 3G traffic. This also increases concerns around 5G security, since older technologies have multiple security vulnerabilities that will still be present in these multi-generational networks.

2021 Prediction – 2G will finally disappear in North America, but not in Europe 

In North America, all major mobile operators will have shut down their 2G networks by the end of 2021. In Europe, however, the shutdown has been complicated by the use of 2G in smart meters and in eCall modems in cars, which initiate a call to send information, such as the location of an accident, to emergency services.

2021 Prediction – Mobile operators will build more relationships with cloud providers for Mobile Edge Compute (MEC) services 

Nearly all mobile operators state that mobile edge compute (MEC) is a vital part of their 5G deployment plans and most are actively deploying or will deploy within the next year or so. IDC forecasts 50% of all new infrastructure deployments (enterprise as well as service provider) will be at the edge by 2023. I believe that mobile service providers will also jump on the advantages of mobile edge compute, but take a more measured, strategic approach to their use of MEC, at least in the near term.  

By 2025, we expect most mobile operators to have deployed 5G (standalone) combined with MEC, directing up to 25% of their traffic through these nodes. Operators will also use strategic partners for their enterprise customers that want the lower latency that a mobile edge compute service provides.

2021 Prediction – DDoS detection and mitigation will become the top security investment priority for MEC networks

It’s already going in that direction. DDoS attacks are becoming more frequent and intense, yet most are smaller in size, making them harder to detect. The average attack size is only 12 Gbps, with most attacks being under 5 Gbps. Recent research shows 10 million DDoS weapons are available.

The Heavy Reading 5G Security Report shows that small DDoS attacks are the primary reason for investment priority for MEC. And with MEC capacity as low as 600 Mbps, mobile service providers and their new 5G enterprise customers are at substantial risk for these common DDoS attacks. 

Overall, in spite of the pandemic, we believe that demand for 5G services will be strong and that subscribers will continue to find more value and use cases from the growing 5G capability. 

Anthony Webb: E-Merchants – Secure Your Online Sales from Cybersecurity Threats for 2021 and Beyond

Written by Anthony Webb, EMEA Vice President, A10 Networks

Last year, online retailers started to offer prolonged sales periods in the hopes of recouping revenue lost through the closure of many ‘brick and mortar’ stores due to the COVID-19 pandemic. Removing this element of ‘seasonality’, retailers quickly pivoted to predominantly online sales, leading to a significant uptick in users and devices connecting to websites compared with recent years.

Good Cybersecurity is Crucial for eCommerce Success

The good news for e-tailers is that overall sales are expected to continue to grow this season. This takes on added importance when many e-commerce businesses have faced unprecedented disruption. However, one thing is clear: online sales will take centre stage. A recent report found that global ecommerce sales will reach $4.2 trillion and make up 16% of total retail sales, and this is only set to continue as we venture into 2021.

However, just as online sales are at the forefront, so too should cybersecurity be. Retailers aren’t the only ones looking to capitalise on the increase in online spending. With the element of seasonality largely disappearing within e-commerce, hackers have an enlarged window of time in which to profit. We’ve already seen a huge uptick in cyber-threats due to COVID-19. Now, continuously busy e-commerce channels provide cyber-criminals with additional motivation to launch their attacks using some of the tactics below:

Phishing – Phishing and its variants, including spear-phishing and whaling, are email-based attacks that leverage social engineering techniques to fool recipients into providing sensitive information to the attacker. While spear-phishing and whaling attacks are more targeted than phishing, all three forms attempt to get the victim to read the email, click on a link, possibly open an attachment, and ultimately disclose valuable personal or corporate information.

Ransomware – Ransomware attacks, which seek to extort money from victims by encrypting files or entire systems until the attacker is paid a ransom, have become increasingly popular in recent years. Much of this has to do with the potential to make large sums of money from the ransoms. Another reason for the rise in ransomware attacks is the availability of Ransomware-as-a-Service (RaaS) kits, which are inexpensive to purchase on the black market, making it easy for novice hackers to launch their own attacks. Phishing emails are the top threat vector for distributing ransomware.

Distributed Denial of Service (DDoS) – DDoS attacks are designed to stop a computer, server, website, or service from operating by flooding it with internet traffic generated by an army of bots called a botnet. The tremendous growth in Internet of Things (IoT) devices, many of which are not secured, has made it easier for attackers to take control of more devices and create botnets. DDoS attacks can be especially damaging to e-commerce businesses if customers can’t access their websites to make purchases.


Malware – Malware attacks take many forms including viruses, worms, spam, spyware, and more. Some malware threats such as spam are more of an annoyance, while others such as viruses and worms can spread across a network infecting systems and negatively impacting their performance and user productivity. Similarly, spyware can slow down systems. However, it can also be used to report sensitive information such as passwords back to the hacker.


Injections – Injection attacks such as cross-site scripting and SQL injections are used to exploit vulnerabilities in web applications by injecting malicious code into a program, which then interprets the code and changes the program’s execution. In other words, it gets the application to do something unintended, such as altering the behaviour of a website or exposing confidential data like login credentials to the attacker. E-commerce businesses hit with an injection attack could find their customers redirected to a fake site which illegally harvests customer information.
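
Here is a minimal sketch of that mechanism using Python’s built-in sqlite3 module: the first query concatenates user input straight into the SQL string and leaks every row, while the parameterised version treats the same input as a harmless literal.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "' OR '1'='1"

# Vulnerable: input is spliced into the query, so the injected OR clause
# matches every row and discloses data the attacker should never see.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + malicious + "'").fetchall()
print(rows)  # [('alice', 's3cret')] -- leaked

# Safe: a parameterised query binds the input as a literal value.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()
print(rows)  # [] -- no match, no injection
```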


The Consequences of Poor Cybersecurity

With e-commerce transactions continuing to boom, effective cybersecurity takes on added importance. If e-commerce merchants are not prepared to stop malware, DDoS attacks, and other threats, the consequences of a successful attack could be the difference between surviving and ceasing trading. Here’s what businesses could be facing:


Lost Revenue – Any downtime to a web server that prevents customers from making a purchase is damaging to online sales and can potentially have a severe impact, especially for smaller organisations.

Data Theft – The increase in online shopping is a lure for cybercriminals to launch attacks aimed at stealing corporate and customer data. Phishing emails claiming to have information on fake shopping receipts, shipping status, and customer surveys are very popular in the run-up to Christmas.


Disruption of Services – DDoS and ransomware attacks can target services that we deem essential. E-commerce sites, public utilities, and schools are just a few examples of their victims. Shutting down access to a service, even for a short period of time, can have major financial and social impacts.

Damaged Reputation – Damage can extend beyond short-term financial losses and data theft. Consumer confidence and brand reputation can quickly erode when consumers have a poor online experience. Customers aren’t shy about using social media to express their displeasure.


Reduced Productivity – It’s not just customers who feel the impact of a successful attack. If employees can’t access the applications they need to do their jobs, expect to see a drop in productivity with an accompanying rise in undesirable workarounds.


Steps to Take for 2021 and Beyond

Cybersecurity isn’t just something to think about during traditionally busy shopping periods such as Christmas. It’s an everyday concern. Fortunately, there are some things that organisations can do to keep applications, networks, and the business safe from threats, especially during peak online shopping periods.

First, look for a solution that provides DDoS detection and mitigation to ensure services are continually available to legitimate users. Hackers have learned how to weaponise IoT devices to launch complex multi-vector and volumetric attacks, capable of bringing down application servers and entire networks.

Second, protect web-based applications with web application firewall (WAF) technology. Outdated applications are especially vulnerable to attacks. A WAF will secure them from hackers looking to exploit HTTP and web application-based flaws.

Third, find solutions that meet current and future platform needs. Organisations may not have transitioned to the cloud yet, but they’ll likely have some cloud-based apps. They must be sure their solution is ready when the company is ready, whether it is moving to a hybrid cloud or multi-cloud infrastructure. And finally, continue to educate employees on the need for good cyber hygiene. According to a 2019 IBM study, 95% of cybersecurity breaches are caused by human error.

With this shift to online potentially a permanent one, e-commerce merchants should expect these sustained levels of activity going forward, throughout the entire year. Therefore, it is imperative that e-commerce businesses secure applications, servers, and networks from cyber threats at all times.


Harjott Atrii: Why moving to the cloud makes business sense for the financial sector

By Harjott Atrii, Executive Vice President and Global Head, Digital Foundation Services, at Zensar

Traditionally, the financial sector has been a late adopter of cloud technologies, focusing instead on on-premise IT frameworks. However, the last year has seen an upheaval in the way IT is consumed across all businesses. Last year we saw global businesses across the BFSI sector almost come to a standstill. Those firms that were early adopters of digital were the only ones to remain customer-centric for the most part, by staying operational. Furthermore, customer experience took centre stage as all users demanded the superior engagement offered by leading e-commerce firms.

Gartner estimates that worldwide spending on public cloud will grow by 18.4 percent in 2021 to a total of 304.9 billion dollars, up from 257.5 billion dollars in 2020. Further, another Gartner report states that the proportion of IT spending shifting to cloud has increased in the aftermath of the rapid adoption of the “work from home” model, with cloud projected to make up 14.2 percent of total global enterprise IT spending in 2024, up from 9.1 percent in 2020.

Moving to the cloud has multiple inherent benefits for the financial sector because of its ability to offer pay-as-you-use pricing, platform-agnostic features, cost efficiency, business continuity, speed, agility and flexibility.

Recently, we helped one of our BFSI clients adopt Hyperconverged Infrastructure to deliver flexible, scalable, and easy-to-manoeuvre infrastructure. The client is a global Fortune 500 provider of risk management products and services headquartered in the U.S. They provide specialty and niche-market insurance products across diverse insurance sectors and operate in over 20 countries. We were successful in delivering higher levels of availability and reliability with data-driven capacity planning, leading to faster network service (both on-premises and cloud) following the infrastructure migration. The business saw a significant reduction in Total Cost of Ownership, a 70% increase in cost savings, and near-zero-touch operation of a multi-cloud solution using our in-house Artificial Intelligence for IT Operations platform.

I’ve found it extremely interesting to observe the growing interest amongst global financial entities, who are choosing to move to the cloud and to shift some key customer-facing operations there too. I believe this sector is realising that getting future-ready means adopting the cloud and becoming a digital enterprise. The speed at which cloud services are adapting to sector-specific needs is also helping, by allowing organisations to make quicker and better-justified decisions. Plus, cloud providers are creating regulation-friendly offerings for the financial sector, which is making deployment easier.

Returning to the example of the work we did for the customer cited earlier, the convoluted state of its technology was the biggest challenge the customer faced. The organisation had a disparate technology landscape with its primary data centre in the United States. A complex application landscape with fragmented on-premises, hybrid and public cloud footprints was impeding business agility and delaying the time to market of critical products and services. Additionally, there was the cost burden of managing sizable on-premise infrastructure.

The lack of the right technology to support a fully automated Infrastructure as Code (IaC) private cloud offering had given rise to a monolithic architecture driven by manual and time-consuming processes. The absence of automation and orchestration for its vast on-premise environment led to excessive manual intervention and a large number of human errors, which in turn reduced the product release velocity for business users.

Below are some of the key benefits the financial sector can leverage when adopting the cloud:


Scale and speed

Cloud rapidly accelerates capacity and operational bandwidth. Creating new offerings and taking them to market faster, while remaining within regulatory norms, is now easily possible.

Superior customer experience

Customers are spoilt for choice when it comes to digital buying and experience. This was felt very visibly during the early days of the pandemic, when lockdowns made digital the only way to transact. The result is customers demanding better digital offerings and a simplified, easy-to-use experience when interacting with brands.

Cost-efficiency

One of the biggest learnings from the global pandemic has been the need to be lean and efficient while running any business. The financial sector felt the brunt of this due to its heavy reliance on maintaining expensive on-premise frameworks. The return on digital is very visible for a financial entity looking to remain competitive and respond promptly to market dynamics.

The cloud enables businesses to launch products in the market faster and at a lower cost. The ability to quickly apply sophisticated technology, such as analytics, can transform an organisation’s ability to adapt and to create products and services in response to market and competitive changes. This helps to maintain differentiation and a competitive edge in a dynamic marketplace.

Scalability and flexibility

One of the most lucrative benefits of moving to the cloud is its scalability, given the sudden need to respond to market volatility. The recent global situation clearly underscored the need for scale in operations as well as the flexibility to respond to changes effectively and efficiently. Cloud technology can also help the financial sector gain customer insights through behavioural analytics, so that every click and tap on an offering or website can be captured and acted upon.

Secure cloud

This sector is all about customer trust and secure operations. Security is at the core of all new cloud offerings; in fact, cloud security has matured to include sophisticated security measures and controls. Security and data privacy remain key concerns that affect adoption across this sector. However, strides have been made in the way cloud is designed, with consistency and automation to counter growing attacks, and a heightened focus on integrated security and data privacy features. The best option lies in adopting a hybrid cloud model that complies with privacy regulations and applies secure protocols across public and private data.

While the above offers many reasons to move to the cloud, financial services firms have to keep their customers at the core while making this decision. Adopting a robust cloud strategy not only enhances customer engagement, it also results in a more successful business model. A cloud strategy will certainly help in making a financial services business more resilient, dynamic, competitive and ready to embrace future opportunities, today.