
Have you been asked to work from home yet? All my customers have more or less closed down their offices, except for a hacked company where I will spend some time over the following two months. I predict that we will see a surge of attacks in the coming months, as many companies suddenly face the challenge of protecting their devices and information outside the corporate network.

I will spend the following months looking deeper into ZeroTrust, so if you have an interest in how to implement this, do drop in now and then.

The first principle of ZeroTrust is to verify everything. It sounds like a simple thing, but in reality it affects how you build software, use authentication, share documents and so on. If we start with the development process, the basis is that you can never trust any data. You need to verify that it conforms to the format you are expecting, that it comes from a verified source, and that it comes from an account that has the rights in the first place. It shouldn't come out of context, and it should conform to the flow of data that you expect. In the end, all principles of the Secure Development Lifecycle must be adhered to.
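As a minimal sketch of the "never trust any data" idea, the checks above could look like this. All names here (the allowed sources, the right name, the ID format) are illustrative assumptions, not from any specific framework:

```python
import re

# Illustrative sketch: verify source, rights and format before acting on data.
ALLOWED_SOURCES = {"billing-service", "crm-service"}  # assumed trusted senders

def handle_message(message: dict, sender: str, sender_rights: set) -> str:
    # 1. Verify the data comes from a source that is verified.
    if sender not in ALLOWED_SOURCES:
        raise PermissionError(f"unverified source: {sender}")
    # 2. Verify the account has the rights in the first place.
    if "update_customer" not in sender_rights:
        raise PermissionError("sender lacks the required right")
    # 3. Verify the data conforms to the expected format.
    customer_id = message.get("customer_id", "")
    if not re.fullmatch(r"[A-Z]{2}\d{6}", customer_id):
        raise ValueError("customer_id does not match expected format")
    return f"accepted update for {customer_id}"

print(handle_message({"customer_id": "SE123456"},
                     "billing-service", {"update_customer"}))
```

The point is the ordering: identity and rights are checked before the payload is even parsed, so unverified data never reaches the business logic.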

When it comes to document sharing, or data sharing in general, there are no safe repositories anymore. Every object needs to be protected on an individual basis. I generally refer to this as the fourth level firewall, mostly as a joke. The first level firewall was the physical wall: you needed physical access to the computer to access the information. The second level firewall was the network firewall, so you needed access to the network to access the information. The third level firewall was the device firewall, where you needed access to the application to be able to access the information on the device. The fourth level firewall is encryption. You need to have the right credentials to access the information.
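The per-object idea can be sketched like this: each document gets its own key, and without that credential the stored blob is useless. This is a toy illustration only, not real cryptography; in practice you would use a vetted authenticated cipher such as AES-GCM from an established library:

```python
import hashlib
import secrets

# Toy "fourth level firewall" sketch: one key per object. The XOR keystream
# below only demonstrates the concept and must NOT be used as real crypto.
def _keystream(key: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def protect(document: bytes) -> tuple:
    key = secrets.token_bytes(32)  # a fresh key for this one object
    blob = bytes(a ^ b for a, b in zip(document, _keystream(key, len(document))))
    return blob, key

def access(blob: bytes, key: bytes) -> bytes:
    # Only the holder of this object's key can recover the content.
    return bytes(a ^ b for a, b in zip(blob, _keystream(key, len(blob))))

blob, key = protect(b"quarterly numbers")
assert access(blob, key) == b"quarterly numbers"
```

The design point is that protection travels with the object itself, so it no longer matters which repository or network the blob ends up on.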

The third part of the principle is the authentication system. ZeroTrust implements an identity boundary, hence the identity system becomes key to a ZeroTrust implementation. The identity system needs to be trusted and have the capability to verify the identity. The identity needs to consist of as many verifiable components as your security policy requires.
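A tiny sketch of that last sentence, with made-up component names: the policy lists which verifiable components an identity must present, and access is only granted when all of them are there.

```python
# Hypothetical policy: the set of verifiable identity components required.
POLICY = {"password", "device_certificate", "mfa_token"}

def meets_policy(presented_components: set) -> bool:
    # The identity is accepted only if it carries every required component.
    return POLICY <= presented_components

print(meets_policy({"password", "mfa_token"}))                          # missing a component
print(meets_policy({"password", "device_certificate", "mfa_token"}))    # all components present
```

Raising or lowering the bar then becomes a policy change, not a code change, which is what makes the identity system the natural enforcement point.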

So in my previous post I started to look very briefly into the history of ZeroTrust. From that we learned that the hackers evolved to use more efficient tools that could easily penetrate the network security. The obvious goal of ZeroTrust is to strengthen security. Without those measures, the risk of becoming a victim increases a lot. But to be more precise, ZeroTrust doesn't replace network security, as many believe; instead it adds a layer of authentication protection on top of network security. Both need to adapt to and strengthen each other. Without a common understanding of the goals, an inefficient and costly security setup is not only possible but very likely.

A second goal is more an effect of ZeroTrust: enablement of a mobile workforce. Mobile work is happening, and we will see even more of it in the future. With mobility comes the requirement for devices that can protect themselves on a hostile network, and with devices that can protect themselves on a hostile network comes the possibility to work remotely.

The third goal is to enable cloud services. Why is this a goal? Well, when the users move out onto the internet, it becomes possible to start using other services that are both cheaper and more secure when delivered as cloud services.

What is ZeroTrust? The name has its root in Jericho 2.0 (see the books downloadable from this blog here) and can be roughly translated to: 'You can never know who roams your network, so verify all access all the time. Never trust what you can't verify.' The implications of this affect the way we design our solutions, networks, access controls, indeed our entire security setup.

If we go back to how security was normally addressed, we had an authentication system managing the 'logons', we probably had antimalware of some kind, but the most intricate security functions were firewalls, IDS/IDP systems, MAC protection and so on. The whole goal was to put as many barriers in front of the network as possible, making it impossible to penetrate. This made sense at the time, because malware used network vulnerabilities to spread, and as updates to operating systems and applications were seldom made, at least not with the speed one would have hoped for, relying on network security was the thing to do.

But changes happened. On 4 May 2000 we all felt loved when ILOVEYOU started spreading around the world. I remember this, as I was at a customer site when it struck them and the network security admin came running into the server room and pulled the plug on the Exchange server. This was not the first attack of its kind, but it showed the vulnerability of relying on network security. This worm needed user interaction to trigger, it was a simple Visual Basic script, and it used a standard application to spread itself. It bypassed all the standard network security functions, hence the attack became widespread and created a need for network filtering at the protocol level, so that all traffic was inspected before being allowed through. This of course raised challenges and created chokepoints, not to mention the cost and the need for constant updates. Outsourcing the service to an external provider that could apply faster updates and use bigger hardware quickly became the solution.

Moving forward, the hackers saw that tricking the user was the easiest way to circumvent all network protection, and phishing was born. With phishing came the possibility to trick users into giving out their credentials. Soon followed remote control of the computers, and finally the weaponising of hash harvesting. Network security was dead.

This might be interesting. A few hours ago I was contacted by a company that provides consultancy within the automotive business. Apparently they have received a request for cybersecurity in car development, which is a completely new skillset for them, so they have reached out to me to check if I'm the right person.

So what is cybersecurity in a car? Frankly, it is nothing out of the ordinary: secure development, threat modelling, secure infrastructure, key management, and that's about it. Anyone working deeply in the automotive industry will disagree with me, but from what has been presented to me, that looks to be more or less it. If I'm offered the assignment I will most probably accept, just for the chance to learn some more details.

I expect there will be several challenges that I haven't encountered in years, like thinking about processor usage, size of applications, optimisation and so on.

It is very interesting to see what happens when legal gets involved and starts reading paragraphs to the sourcing provider. Apparently we are now allowed to do more or less anything we want, as long as we don't make changes to service accounts or restart the servers.
We have just deployed Azure ATP on the premises to get some understanding of what is happening with all the domain admin accounts. We have killed off all accounts that were personalised, and currently we are running with just five accounts that are heavily monitored. It is amazing for my customer to see what the sourcing provider is actually doing, and to be able to verify their work against the contract just by comparing where they log on with the tickets they receive.
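The correlation itself is simple. A rough sketch of the idea, with entirely made-up data shapes (account, timestamp, host for logons; account plus a time window for tickets): flag any admin logon that has no open ticket covering it.

```python
from datetime import datetime

# Assumed data shapes for illustration only.
logons = [
    ("admin-03", datetime(2019, 3, 2, 14, 5), "SRV-DB01"),
    ("admin-01", datetime(2019, 3, 2, 10, 30), "SRV-APP02"),
]
tickets = [
    # (account, ticket window start, ticket window end)
    ("admin-01", datetime(2019, 3, 2, 9, 0), datetime(2019, 3, 2, 12, 0)),
]

def unexplained(logons, tickets):
    """Return logons with no matching ticket open at that time."""
    flagged = []
    for account, when, host in logons:
        covered = any(acc == account and start <= when <= end
                      for acc, start, end in tickets)
        if not covered:
            flagged.append((account, when, host))
    return flagged

for account, when, host in unexplained(logons, tickets):
    print(f"{when}: {account} on {host} has no matching ticket")
```

Anything the function flags is a provider activity that the contract's ticketing process cannot explain, which is exactly the conversation starter for legal.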

To be frank, their service provider is most probably not up for renewal next year. Monitoring of accounts sure is valuable.

Following the discussion with legal after my previous post, we have got some guidance on how to move forward. Apparently this was a common business practice from the service provider's side to minimise cost. When challenged by the legal department, they quickly became more accommodating in helping us. This is something to take note of: never allow a service provider to dictate your security practices.

Right now we have found out that the domain admin accounts we were investigating for suspicious behaviour weren't personalised as stipulated in the contract; due to cost management they were used as group accounts on standard laptops, not on dedicated workstations as the contract requires.

Moving back to legal, there sure are going to be changes here. I'm not too keen on being the bearer of bad news, but then again, I'm not too keen on staying silent when I see something that puts my customers at risk.

More to come for sure.

Welcome to 2019, the year when we are supposed to know what we are doing. I'm currently experiencing an interesting situation with a customer. They have outsourced their AD to a service provider, and right now I'm helping them investigate a rather simple problem: which servers are using unsigned LDAP. There are a bunch of reports readily available in Active Directory, and there is a lot more information to get by running a few tools. This is no big deal; normally it would have been done in about eight hours, with a report back the day after.
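For those curious about the mechanics: when LDAP interface diagnostic logging is raised on the domain controllers, the Directory Service event log records event 2889 for each client performing an unsigned or simple bind. Assuming those events have been exported to CSV (the column names below are my assumption, not a fixed format), summarising the offenders is a one-pager:

```python
import csv
import io

# Sample export of Directory Service event 2889 data (columns are assumed).
SAMPLE = """client_ip,account,bind_type
10.0.0.15,svc-backup,unsigned
10.0.0.15,svc-backup,unsigned
10.0.0.23,app-web01,simple
"""

def offenders(csv_text):
    """Count unsigned/simple LDAP binds per (client, account, bind type)."""
    counts = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        key = (row["client_ip"], row["account"], row["bind_type"])
        counts[key] = counts.get(key, 0) + 1
    return counts

for (ip, account, kind), n in sorted(offenders(SAMPLE).items()):
    print(f"{ip} {account}: {n} {kind} bind(s)")
```

Reading exported logs like this touches nothing on the servers themselves, which is rather the point given the discussion that follows.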

Right now we are stuck on our third day, trying to answer very detailed questions from the service provider about the exact impact on the servers. We are almost down to the level of measuring the processor usage of the tools. My customer is used to this, but I have started to question why we are even doing it. After some digging in the contracts, I have found that the service provider has a strict SLA: anything not run under a change request, which takes the server completely out of their responsibility, including all servers affected by that server, will be challenged indefinitely. So any change you want to make to Active Directory means that every server connected to Active Directory is included in the change request, putting most of the server park in maintenance mode. That is not covered by the standard contract, creating an extreme cost for my customer.

Today we just gave up for now and have asked the legal department for advice.

Bolted on

What did my friend actually mean by 'bolted on'? He means a security solution that may or may not be well integrated into the operating system. Even though that is a big issue in itself, the real challenge was that the user interfaces sometimes demanded serious training to use, making the cost of using those solutions much higher than solutions with a familiar interface that looks like the operating system, or like other applications built for it.

Is this really a security problem? Yes, because it consumes resources that could be better spent providing a higher degree of security. Instead of a team of eight, he needs a team of ten just to cope with a few solutions that use non-standard interfaces and non-standard integration.

So the first thing when he's back from vacation is to replace those solutions with ones that are better integrated with the rest of his tools. Best of suite beats best of breed.

Happy new year

It is somewhere between late night and early morning. Family has stopped celebrating, the bottles of champagne are empty and everyone is sleeping. Only the security architect is awake.

During the festivities I had a long discussion with a friend of mine about the futility of cybersecurity. How hopeless it is to try to stay on top. Either you lose within a few hours, or you spend hundreds of thousands of Swedish kronor on consultants who only implement security solutions that you don't understand and struggle to operate afterwards. His simple question to me was: how on earth am I supposed to come out on top of this?

I had to give it some thought, as he was in effect saying that my work didn't provide value to a company. After some clarifications, and there was champagne involved, I understood that it wasn't my work in particular that was the problem, but all those non-standardised technical solutions that were challenging to integrate and operate.

We boiled it down to three larger problems:
• bolted on rather than built in, meaning that the user interface was not standardised, so staff needed specific training just to navigate it
• siloed solutions, so an event in one tool was hard to correlate with another, despite a well-working SIEM solution with trained staff
• measuring effectiveness: they bought the tools others bought, but it was hard to show any effect, as the only way to prove it was conducting expensive pentests that would still find another way through the defences

We didn't find any good solutions that night, and when the gin and tonic was served we forgot about it. But now I'm sitting here thinking about it, and I might have an idea for moving forward. Drop in here in a few days; I'm sure I'll have cracked a few bright ideas by then. Until then: Happy New Year!!!
