Nephos Approved for G-Cloud 11

London, 11th July 2019: Nephos Technologies, a leader in data services, has been selected as a preferred supplier on the UK Government’s G-Cloud 11 framework.

The G-Cloud framework is an agreement between the government and suppliers who provide cloud-based services. G-Cloud has been streamlined to enable public sector bodies to consume Cloud services, support and professional services faster than is possible by entering into individual procurement contracts, and with a clearly defined cost model. 

All public sector organisations can use the Digital Marketplace and G-Cloud framework, co-ordinated by the Crown Commercial Service, part of the Cabinet Office.

The Government’s continued drive toward digital services will see the G-Cloud framework continue to be a vital tool in enabling digital transformation for the public sector.

As part of Nephos’ inclusion within the G-Cloud framework, three professional service offerings, each focused on helping customers develop a fit-for-purpose data services strategy, have been approved:

•    Smart File Assessment – Nephos Smart File Assessment provides visibility into the profile of your unstructured data, enabling us to develop a fit-for-purpose data management strategy

•    Smart Discovery – Nephos Smart Discovery identifies risk associated with the use of known and unknown Cloud services as well as IaaS configuration compliance.  Without this understanding, it is almost impossible to protect your key asset: your data

•    Smart Strategy – Nephos Smart Strategy provides a strategic technology roadmap, based on actual data, that best suits your organisation’s long-term digital strategy.  We identify and recommend the right technology strategy for your organisation, with clear financial models for the recommended approach

Lee Biggenden, Co-Founder of Nephos Technologies, said: “The G-Cloud framework has been pivotal in enabling public sector Cloud consumption since its inception, and we believe that trend will only continue as government bodies move further into the Cloud and digital transformation. We’re delighted to be part of the G-Cloud framework and see this as a great platform for us to extend the services that we already provide to our enterprise clients to the public sector, within a clear and consistent commercial framework that works for them.”

The UK Government’s vision is to transform the relationship between citizens and the state – putting more power in the hands of citizens and being more responsive to their needs.  They want to be able to:

•    better understand what citizens need

•    assemble services more quickly and at lower cost

•    continuously improve services based on data and evidence.

This dependence on data to transform government services, and to make government itself a digital organisation, requires a strategy based on that data, utilising new technologies capable of supporting that approach today and in the future. This digital transformation is laid out as part of the Government’s 2019 transformation strategy.

The services that Nephos Technologies have had approved as part of the G-Cloud framework are ideally suited to help government organisations of all shapes and sizes to meet these changing demands.

About Nephos Technologies

Managing today’s data requirements with legacy technology defies progress. How you store, protect, distribute and process data can enable positive change; but data is ever-changing, can be hard to handle and is unique to your business.

At Nephos, we know a one-size-fits-all solution doesn’t work. We listen to you, remain independent and focus on the data, not the product. We make a plan for your business that’s genuinely right for you. Our job is to be there with you, every step of the way.

For further information, please see

If you’d like to read our recent whitepaper on securing data in the Cloud, then you can download that here


Hena Begg – Marketing Executive, Nephos Technologies
Email: [email protected]

I’ve been thinking about Slack’s recent comments that maybe, just maybe, the security of their platform may not be where it should be. My favourite part of the statement (and if you’ve not read it, you can get to it here): “The security measures we have implemented or integrated into Slack and our internal systems and networks (including measures to audit third-party and custom applications), which are designed to detect unauthorised activity and prevent or minimise security breaches, may not function as expected.”

Could you imagine Mercedes or Volvo making a statement about the brakes on your car like that? “The brakes on your vehicle may or may not function as expected!”

To me it begs the question: if Slack weren’t going public, and subsequently being forced to disclose this, would it ever have come to light? Then you ask yourself: if a company like Slack – trusted by firms like NASA – has insufficient security, what about those apps that have less investment, that are less enterprise-ready?

I don’t want to turn this into a discussion about whether the Cloud is secure, or the fact that Cloud services operate a shared risk model – I would hope that argument is long past – but what this issue highlights is that it’s not just about the risk of unknown, untrusted applications. 

Trusted vs. Untrusted – Is There a Difference?

People have made a big deal about Shadow IT for a long time, and don’t get me wrong, it’s a problem, but it doesn’t mean you can neglect the security of the apps that you knowingly use – Slack’s a great example of this.

To put this into context, we recently ran a study for one customer, identifying their known and unknown cloud apps through our Smart Discovery service. Before we started, there were circa 30 cloud-based apps that were known/trusted. Through the process of our assessment we identified over 3,000 cloud services in use. Now, the services that customer didn’t know about pose an obvious risk, but the trusted apps carry risk too.
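As a rough sketch of the kind of analysis such a discovery exercise involves (this is not the Smart Discovery implementation – the domain names and the sanctioned list below are invented for illustration), you can split the hosts seen in a web-proxy log into sanctioned and unsanctioned sets:

```python
from urllib.parse import urlparse

# Hypothetical sanctioned-app register; a real assessment would pull
# this from the organisation's own application inventory.
SANCTIONED = {"slack.com", "office365.com", "dropbox.com"}

def discover_cloud_services(proxy_log_lines, sanctioned=SANCTIONED):
    """Split cloud services seen in a web-proxy log into known
    (sanctioned) and unknown (shadow IT) sets of domains."""
    seen = set()
    for line in proxy_log_lines:
        # Assumes one URL per log line; real proxy logs need real parsing.
        host = urlparse(line.strip()).hostname
        if host:
            # Reduce e.g. files.dropbox.com to its registrable domain.
            seen.add(".".join(host.split(".")[-2:]))
    return seen & sanctioned, seen - sanctioned
```

In practice the unknown set is what surprises people: it is usually an order of magnitude larger than the sanctioned list, as it was for the customer above.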

To give you an example, this customer uses Slack, which, as part of a project driven by the marketing department, was connected to a front-end web service by an outsourced developer. The terms of that API connector software state that if you use it, the provider has the right to use the data processed by it. Both are trusted, known applications – but with huge risk.

The ability to create an interconnected set of applications, through API services like Zapier, makes controlling your data all the harder. Statistically, for every anchor tenant, e.g. Office 365, there will be around 25 ecosystem applications connected to it. They may be enterprise-ready, they may not, but that becomes irrelevant when you don’t even know you’re using them.

I guess my point is that these services, whether known or unknown, should be treated the same.

Data Leakage, Viruses and Malware Exist In Trusted Platforms 

Malware, viruses and data leakage aren’t new problems. Organisations have spent billions on endpoint security technologies to protect against this sort of thing, so why is it acceptable to download data from a third party’s Dropbox to yours without taking the same precautions?

I’m all for using things like O365 and G-Suite – I mean, really, who wants to manage something like Microsoft Exchange – but it’s a really soft entry point for malicious content that most people don’t have covered off. Likewise, DLP around cloud storage services like OneDrive is weak for most firms; sharing data out of the business, intentionally or otherwise, is simple. If you’re using O365, for example, ask yourself: do you stop the use of things like publicly accessible links? Do you screen for malware on content downloaded to OneDrive?
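To make the publicly accessible links question concrete, here is a minimal sketch of flagging anonymous sharing links. It assumes you have already fetched permission objects from Microsoft Graph’s `/drives/{id}/items/{id}/permissions` endpoint (authentication and paging omitted); in Graph’s documented permission resource, a sharing link open to anyone has `link.scope` set to `"anonymous"`:

```python
def find_public_links(permissions):
    """Given permission objects for a drive item (as returned by the
    Microsoft Graph permissions endpoint), return the sharing links
    that anyone on the internet can use."""
    return [
        p for p in permissions
        if p.get("link", {}).get("scope") == "anonymous"
    ]
```

Running a check like this across a tenant’s drives is exactly the sort of visibility most firms don’t have today.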

This same issue is now extending to IaaS platforms like Amazon S3. Though malware here is, for now at least, less of a risk in my opinion, data leakage is absolutely an issue; particularly when you consider how many third parties employed by large enterprises for app development have poorly controlled access to cloud infrastructure.

These are known services, trusted by the world’s largest firms, yet if not managed correctly they can carry just as much risk as any unknown app.

It comes back to a really basic point – you can’t stop what you can’t see! It doesn’t matter whether the application you’re using is trusted or not, or whether you know about it or not: you need visibility into what those applications are doing, where they pose risk and who’s doing what with them.

Ultimately, we can’t take for granted that these so-called enterprise-grade applications, like Slack, are securing our data effectively. Your data is still your data, and your risk is still your risk, regardless of whether it’s in a platform you’ve knowingly invested in or an app being used outside the governance of the IT department.

Imagine a world where you could look at a dashboard and get a complete overview of your data: how much you had, where it was, how much it was costing you, whether it was secure, how to optimise it, which were the most valuable elements – and all of this done automatically for you! Hopefully that will come along in the future. Now think about how your data estate currently looks: no real idea how much you have, where it resides, how much it is actually costing you, whether it is totally secure (especially with some of it now residing outside your firewalls) or which elements are actually valuable to you.

As the forecast data deluge doesn’t look like it’s stopping any time soon, being able to profile and analyse your data is coming more and more to the fore. Vendors are pushing big data analytics like it’s going out of fashion, and apparently companies should be keeping all the data they have ever created, just in case they might need it in some undetermined time frame. Unfortunately, most companies’ IT budgets are decreasing rather than increasing to deal with all this additional data. So we are now at a stage where how data is managed and stored needs to be looked at in more detail – but where to start?

Add to this the fact that the price of storage is coming down, dedupe ratios are increasing and storage is generally more efficient than it was five years ago, and it is quite easy to miss this ticking timebomb within your network. For most companies, the price of storage is currently falling faster than their storage is growing, so they are tricked into thinking the problem doesn’t exist. There will come a point, very soon, where data growth starts exceeding the decrease in the cost of storage, and at that point they will be left with a storage bill that just increases year on year.

Historically it was a little easier: with the majority of data held in a centralised location, whether that was one or a small number of data centres, the management of that data was a simpler thing to control. Now, with data being created in most if not all sites, and collaboration being one of the key business drivers, we have moved to a more distributed data landscape. The problem is that most of the tools currently on the market are set up to profile and manage data in as few locations as possible, as they were designed for a time when data was centralised.

So where to start? Well, the first thing a company needs is information – which is ironic when talking about data! Without relevant and correct information about your data, how can you possibly make decisions about what to do with it? There needs to be some way of profiling all of your data, to get a picture of how much data you have, where it resides, how it’s used and how old it is. Once you have that information you can start to build up a picture of how data is being used within your organisation. The first, and easiest, place to start is looking at older data that hasn’t been accessed for a long time and moving it to a lower tier of storage (likely object storage, in the cloud or on-site). The next is to look at what data collaboration needs you have and find the best solution to them; the last is to identify which parts of your data are the most valuable, and the right candidates for some form of data analytics on top.
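The first profiling step above can be sketched in a few lines: walk a directory tree, bucket files by last-modified age, and treat the old bucket as tiering candidates. The one-year threshold here is purely illustrative, and a real profile would also capture size, ownership and access patterns:

```python
import os
import time

def profile_by_age(root, tier_after_days=365):
    """Walk a directory tree and split files into 'hot' and 'cold'
    buckets by last-modified time - a first pass at profiling, before
    any tiering policy is applied. The one-year default is an
    illustrative assumption, not a recommendation."""
    now = time.time()
    hot, cold = [], []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            age_days = (now - os.path.getmtime(path)) / 86400
            (cold if age_days > tier_after_days else hot).append(path)
    return hot, cold
```

Everything in `cold` is a candidate for that lower storage tier; everything in `hot` stays where the users need it.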

All of these requirements lead to the fact that a Data Lifecycle Management (DLM) platform is becoming much more of a requirement for all organisations. The ability to scan your data, wherever it may reside, and then make policy-based decisions about where to place it, without any manual intervention, becomes increasingly important.

Let’s get something very clear from the start: Docker isn’t a replacement for application virtualisation. Instead, it is a new, more agile way of developing and deploying applications into production. Now I have that off my chest, we can talk about how the world is moving from virtualisation to containers, in the same way it moved from physical to virtual all those years ago.

There are several players in the market, primarily those working around VMware, trying to convince people that you will need both moving forward, and that they are best positioned to deliver that functionality. Whilst that may be true today, when organisations are using containers for niche projects and are still primarily virtualised, as that balance shifts, so will the requirement for the management and orchestration platform. The main question then becomes: why would you pay a premium for a VMware solution to manage a mainly containerised environment?

Based on this fast-approaching new model, VMware believes it will become the orchestration and management platform of choice for multi-cloud, multi-service deployments. The question I have is: why? More and more of the customers we talk to now either use multiple hypervisors or are considering doing so; they have some form of cloud deployment (adding an additional management layer) and are using containers in some way. VMware has always been the Oracle of the hypervisor world, in that it was reassuringly expensive but, for a long time, the only game in town. That isn’t the case anymore, so why pay a VMware premium if it is no longer the mainstay of the environment you are managing?

If you look at most of the organisations running our digital world today, you would be hard pushed to find a VM. Companies like Google, Twitter, Facebook and the vast majority of web-based companies have already moved to 90%+ container-based application deployments. In a recent survey by Docker, 90% of companies interviewed were using containers in application development and 58% were using Docker in production. So even just playing those numbers through over the next few years, both figures will grow substantially as the applications currently in development move into production.

The main problem with containers right now is the enterprise feature set that is missing around orchestration and management. As everything is included in the container – compute, networking, storage – control-plane management across the entire environment becomes complicated and cumbersome, as you could end up with thousands of containers running different microservices. Connecting all those microservices together isn’t the easiest thing in the world either… currently! With the likes of Docker DC, CoreOS, Kubernetes, OpenShift, Mesosphere and others making huge strides in providing the enterprise features that organisations are looking for, it is only a matter of time before this changes.

In the same way that organisations now run mostly virtualised, with some physical servers for specific applications, I believe the future will be organisations running mainly containers, with some virtual and some physical for specific applications that don’t work well in containers. My prediction is that within 18 months you will see larger organisations primarily using containers as their application deployment model instead of VMs, and within three years it will become the mainstream way that organisations of all sizes deploy and manage applications. Will VMware still be the mainstay of the application orchestration and deployment market? Maybe, but it’s going to be fun to see how they keep themselves relevant with all the changes coming, and the more agile, dynamic players in the space trying to steal their lunch.

I read a blog post recently by one of the large infrastructure vendors talking about “data lakes” and how badly customers need them. I happen to pick on this one blog, but I have read so many recently that I felt I should write something from the perspective of the customers we talk with about this subject on a daily basis. I find this whole topic slightly ironic, and an example of vendors trying to lead the customer rather than the other way around. Don’t get me wrong, there are customers using data lakes successfully and for good reason, but that really isn’t the solution to the underlying problem.

This specific blog made a couple of points about why data lakes are so critical to a business’s future success, claiming that if you want a decent data analytics strategy it is imperative you have a data lake. The points covered were:

Data Lakes Simplify Data Storage – Well yes, that is a fairly obvious point. If all data is in one place, it is a much simpler data storage strategy. The point I would make is: how easy is it for any large customer to actually do this? In previous iterations of storage, around block and file, that has been near-on impossible for most large customers, so why would it be any different now? Oh, I know, it’s because we are using a new buzz term, “data lake”, and all customers should want one of those.

Data Lakes Eliminate Silos – Yep, another fairly obvious point. If you can centralise all data, all silos go away. Wouldn’t it be a wonderful world if that happened! This is such a ridiculous point, and almost impossible for any company of any scale. Unfortunately, there are a number of very valid reasons why silos are a part of your business: legal requirements, site location in the world, business unit separation, different data types (structured, unstructured etc.) and where the data was created (e.g. public cloud), as just a few examples of where silos may be part of everyday life.

Make It Easier to Access – Their specific point was that big data is broken when data is not easily accessible. I completely agree, but that doesn’t mean it all needs to be in one place, just that the analytics engine needs to get better at accepting and combining data from multiple sources.

Now, if this wasn’t actually a blog about data lakes but rather a blog about the benefits of centralised object storage, then I would have completely agreed with every point made. Data is a lot easier to manage, cheaper to store and easier to access if it is in one central, lower-cost object store. That’s just a fact – but how many customers can actually do that with their data? Even if you do consolidate as much as possible, and build this utopia called a data lake, I guarantee that any customer will still have a number of different locations for key data sets. That is just the way a business is run.

I’ve been to several of the Big Data Summits recently and talked to a number of customers looking into these areas, and you know the problem they are trying to solve?

  • How do I build an analytics environment where I can interrogate multiple data sources?

As most large infrastructure vendors don’t have a solution for this, the actual problem, their solution is a “data lake”! Put it all on the same infrastructure in a centralised storage solution and that is so much easier – right?
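The alternative those customers are actually asking for – interrogating multiple sources without first centralising them – can be as simple as answering one question across two differently shaped exports. This toy sketch (the field names are invented for illustration) combines a CSV source and a JSON source in memory, without either ever landing in a lake:

```python
import csv
import io
import json

def combined_spend_by_customer(crm_csv, billing_json):
    """Answer one question ('total spend per customer') across two
    differently shaped sources, without copying either into a
    central store first."""
    totals = {}
    # Source 1: a CRM export as CSV text.
    for row in csv.DictReader(io.StringIO(crm_csv)):
        totals[row["customer"]] = totals.get(row["customer"], 0.0) + float(row["spend"])
    # Source 2: a billing system export as JSON records.
    for rec in json.loads(billing_json):
        totals[rec["cust"]] = totals.get(rec["cust"], 0.0) + rec["amount"]
    return totals
```

At enterprise scale the same idea turns into federated query engines rather than twenty lines of Python, but the principle holds: the analytics layer adapts to the sources, not the other way round.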

This is the polar opposite of the position most of the big data software vendors, and the entire open source arena, are taking – and they have been at this a while longer than most of the infrastructure vendors. I would argue that big data, or data analytics, is an area run by software vendors rather than hardware ones, with the software becoming more enterprise-friendly and gaining the features the average enterprise requires. The software in this space is becoming more and more encompassing of the wide range of requirements, and has had a lot more money invested in it than the hardware it sits on.

Now, I’m not here to say which strategy is right or wrong for any customer, and I’m sure there will be a number of customers that need both, but just be cautious about the emperor’s new clothes now called data lakes, and look into what your software requirement is before looking at the hardware.

As we welcome in the new year, we at Nephos have taken a glance into our crystal ball to predict what we think will be some of the emerging technologies to go mainstream in 2018 – all of this without even mentioning the likes of GDPR or MiFID II, as these are compliance requirements that have to come in this year:

  1. Increased abstraction in application deployment

As companies move more and more into a Continuous Integration/Continuous Deployment methodology and roll out more apps based on microservices, we anticipate that the use of things like Docker, Kubernetes and Mesosphere will grow with it, as a key component in making those types of flexible deployments work. Also, with companies looking to spin up and deliver services in near real-time, the only practical way of doing that is via these types of deployment methodologies.

  2. Companies will look to deploy AI for key applications

2017 saw a number of companies begin to test AI-based applications, and we believe that 2018 will see them roll out into key business applications, especially any apps based on algorithmic, image, risk or relationship data. Because of this move to more AI-driven applications, we’re expecting GPUs to experience triple-digit growth, and high-performance NAS to become more mainstream than the rendering market it mainly plays in currently.

  3. 2018 will be the year when Hyper-Converged becomes the default option

This one’s self-explanatory and realistically the most obvious, given the huge growth in this space over the last five years and the continually growing reasons for deploying it. 2018 will see companies adopt an “HC First” strategy for both data centre and remote deployments, with the biggest challenge coming from the continued consolidation of vendors in that market.

  4. Blockchain will become the standard for finance-based transactional applications

Towards the end of 2017, Bitcoin was grabbing all the headlines, but we heard very little about the technology that actually drives it – Blockchain. Most, if not all, financial organisations have been looking into areas where they could utilise Blockchain-based technologies, with the majority of fintech start-ups that are launching focusing there too. We’re expecting 2018 to be the year we begin to see financial services companies turning to Blockchain for key finance and risk applications, with potentially wider adoption in other industries such as legal services, media and rights management firms.

  5. Critical business workflows will now include Cloud at multiple levels

People talk about “the year of hybrid” and “the biggest year for Cloud ever”, both of which are slightly overstated, but we do think this will be the year when almost every critical business process involves some sort of integration with a cloud-based application. We believe 2018 will be the year when more of a company’s data resides outside the business than inside it.

  6. Vendor-independent Data Lifecycle Management will become a standard

Data is becoming more active – moved around, and used, by more of the company than ever before – with a broader range of storage technologies in use too. As a result, a Data Lifecycle Management (DLM) platform will become more important than ever. Until now, a number of vendors have offered or included some sort of data management solution within their storage offering, but these have generally been very vendor-specific. With more diversity in the storage market (AFA, NVMe, object, cloud), it may not be so easy for your existing storage vendor to provide a platform that supports your whole storage ecosystem – which is where the external, independent providers come in. Lastly, let us be clear: a DLM platform is not just an archive! A DLM platform needs to be able to profile your data (wherever it is), provide insights into that data, make policy-based decisions on where to place it, and keep it always available to users.

  7. Cloud Access Security Brokerage (CASB) will become critical rather than nice to have

As more critical data moves to the Cloud, securing it will become a mandatory compliance point rather than a nice-to-have. That’s not only data stored in SaaS-based applications: with platforms like O365 and Google Apps becoming the de facto standard for email, home-drive file storage and collaboration, the current, traditional security measures just don’t give the protection you need. With ransomware on the increase, and a large proportion of those attacks starting from hacked email accounts, the ability to identify and block them as early as possible becomes critical. Realistically, CASB platforms that can cope with an API-based architecture offer the most effective option.

  8. AI will become the standard for all security solutions

This has started to happen over the last 12-24 months, with players such as Cylance and Exabeam becoming the visionaries in their respective markets, but 2018 will be the year when all security start-ups include AI as part of their overall solution, driving quicker response times and deeper insights into the data gathered, to provide more effective threat protection.

  9. S3 API will become a standard protocol for the storage ecosystem

At Nephos, one of the areas we specialise in is object storage, and we have seen the S3 protocol grow over the past 24 months as a requirement within storage ecosystems, with more and more companies adopting object storage for their capacity-driven data, e.g. backups, content distribution and file storage. We’re expecting S3 support to become a mandatory requirement in 2018, particularly as people seek to use a more API-driven architecture to get tighter integration between their data and their applications.

  10. SD-WAN to overtake MPLS as the first option for WAN deployments

This isn’t a huge leap of faith, with SD-WAN becoming more mainstream (even VMware have entered the market with their recent acquisition of VeloCloud), but most organisations are still using complicated and expensive MPLS delivered by telcos. SD-WAN will almost certainly become more mainstream in 2018, as organisations continue to drive efficiency, the concept of a fixed WAN becomes less relevant, more services move into the Cloud and circuit performance grows at lower cost points. There is now no reason a company shouldn’t look at SD-WAN as the default choice, rather than the more traditional “safe” option of MPLS.
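Part of why the S3 API point above matters is that S3 is a common surface rather than a single product. As a small sketch of what an API-driven architecture looks like in practice, here is a function that pages through a bucket listing; it works against boto3’s S3 client or any S3-compatible implementation exposing the same `list_objects_v2` call (the bucket name below is hypothetical):

```python
def list_all_keys(s3_client, bucket):
    """List every object key in a bucket, following continuation
    tokens. Works with any client exposing the S3 list_objects_v2
    operation - boto3's does, as do S3-compatible object stores."""
    keys, token = [], None
    while True:
        kwargs = {"Bucket": bucket}
        if token:
            kwargs["ContinuationToken"] = token
        resp = s3_client.list_objects_v2(**kwargs)
        keys.extend(obj["Key"] for obj in resp.get("Contents", []))
        if not resp.get("IsTruncated"):
            return keys
        token = resp["NextContinuationToken"]
```

With boto3 this would be called as `list_all_keys(boto3.client("s3"), "my-bucket")`; the same code runs unchanged against any vendor that speaks S3, which is exactly the ecosystem effect we expect to harden in 2018.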
