I love doing presentations, and I had the opportunity in southern Europe to present to a few customers my view on how to administer on-prem services. What I presented was Microsoft's Securing Privileged Access roadmap. If you haven't read it, please do. It goes into quite some detail on how you should manage administration tasks in your environment. The focus is of course on credential theft, as the account is the new perimeter for information.

Credential theft is interesting as it comprises many different things, but one of the more important is the stealing of password hashes and how to prevent it. I explained how it works and the number of ways you can mitigate it. Halfway through the presentation one person in the audience, CISSP-certified and known to me to have worked in the field for 15 years, raised a hand and asked in a slightly annoyed voice: “This credential theft you are talking about. Is it something you recently invented? I have never heard of it before.” The room fell silent and I was stunned for a few seconds, trying to comprehend what had just been said. Luckily the person sitting next to them started to whisper, and within a few seconds I heard a faint “Sorry!”

There was a bit of a laugh in the auditorium, but it opened up an opportunity for me to discuss the importance of using the correct terminology when discussing security and, even more importantly, of making sure that your customer understands it. I was quite sure that credential theft was a well-known term, but apparently not. The phrase is common in the literature, but if you limit the studying you do to maintain your CISSP to just a handful of sites, you might miss the latest developments for 15 years.

SOC for clouds

During a workshop at a customer we started to discuss their SOC. Today it fully manages their on-prem servers and clients, but when I asked about their cloud data center (Azure) it turned out that it was not managed at all, apart from the security functions being activated but not actually used.

Connecting the security functions in Azure to your current SOC is very important, as attacks have started to emerge against Azure as well. Even if Azure gives you good security by default, there is still the possibility of being hacked if you are not careful with the management and deployment of solutions. Activating Log Analytics and having that data sent to your SOC will enable them to manage your Azure environment as well, and give you better insight into your whole environment and the threats against it.
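As a rough illustration, the snippet below is a minimal sketch of turning on diagnostic logging for a single Azure resource and pointing it at a Log Analytics workspace that the SOC reads from. It assumes the azure-identity and azure-mgmt-monitor Python packages; the subscription ID, resource ID, workspace ID and log category are placeholders you would replace with your own, and the available log categories differ per resource type.

```python
# Minimal sketch: send one resource's logs to a Log Analytics workspace
# that the SOC monitors. All IDs below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "<subscription-id>"
resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/Microsoft.KeyVault/vaults/<vault-name>"
)
workspace_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/Microsoft.OperationalInsights/workspaces/<soc-workspace>"
)

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

# Create a diagnostic setting that ships the resource's audit logs to the
# workspace; the SOC queries them from there.
client.diagnostic_settings.create_or_update(
    resource_uri=resource_id,
    name="send-to-soc",
    parameters={
        "workspace_id": workspace_id,
        "logs": [{"category": "AuditEvent", "enabled": True}],
    },
)
```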

Azure is a very extensive cloud service that provides a wide range of functions and a very short ramp-up time. This is all well and good, and it is possible to get very extensive security in place quickly if you get the right licenses and services. What many companies forget is that Azure is not some magically simple cloud service. It is simply another datacenter with some cool functions for you to start utilising. You'll have to treat it as a datacenter, with the same control mechanisms regarding access, resource management, change and so on. The full ITIL is applicable here as well.

Just take a function as simple as access control. Would you allow a technician at your company to manage your whole ordering system for new servers in your existing datacenter through a simple logon with his or her personal email and a weak password? Even worse, using an unmanaged computer in an internet café somewhere, ridden with trojans? Most probably not, but this is the reality for many companies starting out with Azure.
When first starting with Azure, spend a few minutes thinking about how to manage the subscriptions and mimic the security model you have on-prem. Trust me, it makes life a lot easier for you!

IAM is a very strong tool for getting in control of your accounts. With an IAM system covering all standard users you will quickly protect all standard access and manage all access control. On top of that comes the protection of your privileged accounts, and that means more advanced solutions like PAW or ESAE. In essence, you cannot protect a privileged account if you allow it to log on to a workstation where you access the internet or check your mail, no matter what other protection you have.

Should IAM be used to provision privileged users? Yes, to some extent it can be useful, for example for Tier 1 system operators, database admins and so on. But for Tier 0 users (in essence Domain Administrators) you should not use IAM, both because you want very tight control of your Domain Admins and because you don't want your IAM system to become part of Tier 0.
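To make that rule of thumb concrete, here is a toy sketch (not any real IAM product) of the split described above: Tier 1 and Tier 2 accounts may be provisioned through IAM, while Tier 0 stays out of it. The Tier enum and the function are made up for illustration; Tier 2 here is the workstation and standard-user layer from Microsoft's tier model.

```python
# Toy illustration of the tier rule above: IAM may provision Tier 1 and
# Tier 2 accounts, but Tier 0 (in essence Domain Admins) is handled
# manually and kept out of the IAM system entirely.
from enum import IntEnum

class Tier(IntEnum):
    TIER0 = 0  # domain and forest control: Domain Admins, domain controllers
    TIER1 = 1  # servers and server applications: system operators, DBAs
    TIER2 = 2  # workstations and standard users

def iam_may_provision(tier: Tier) -> bool:
    """Everything except Tier 0 may go through the IAM system."""
    return tier is not Tier.TIER0

print(iam_may_provision(Tier.TIER1))  # True  - e.g. database admins
print(iam_may_provision(Tier.TIER0))  # False - Domain Admins stay manual
```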

You have all heard about the layered security approach and probably understood it. Sometimes it just becomes very visible how it works. I recently visited a client in southern Europe where we are delivering a high-security project, and as part of that project we are working in a secure, locked room.

In the endeavor to secure the area, the client decided to exchange the lock and key we were currently using for the central keycard solution with logging, to make sure we knew who was going in and out of the room. Of course, the owner of that solution was trusted to enter the room even before this. As a precaution I decided to review how the keycard solution was administered. This turned out to be a laugh for all of us, one of those laughs you have when you realise you should have stayed in bed instead. The administration terminal was a simple laptop standing wide open in the reception, with no password on the laptop itself and only a two-character password to log in to the system. Furthermore, the server was a desktop standing on the floor, with no protection of the database, no backups and no physical protection whatsoever. The computers were never patched and were running old operating systems.

Two hours later the vendor was on site and received an order to replace the system the very next day.

I had a chat with a friend of mine, who is an enterprise architect and a damn good one at that, regarding integration architecture versus security architecture and where they intersect. While his standpoint is that integration architecture is imperative for understanding how business units should work together, my viewpoint is that from an information perspective, adding a layer of information classification and collusion issues, I need to understand what information the different business units actually use in order to apply a correct classification to that information. Those of you who have followed me for some time know that my view on classification is that it is only an accelerator for deciding what type of protection you should apply and the type of authentication that should be used.

I had a case a few years back regarding access control and how visitors were to be registered before being allowed into a secure building. It turned out that there was an automated approval flow that moved information from classification level 3 to classification level 2 (a lower classification) without anyone understanding the consequences. In this specific case it enabled a combined social engineering and technical attack that in the end allowed me, as a consultant, to enter the facility with a full access card.

I received a mail recently asking how many domain admins a company should aim to have. Of course, this always depends on the structure of your company, but as a rule of thumb I aim for five domain administrators.

So, why five? It is actually quite easy to calculate. First of all, the domain admins should only do domain admin tasks: managing the domain admins, managing the domain controllers, managing the forest, and so on. A domain admin should not bother with creating users; that task should be delegated to someone else. This means that the number of accounts needed shrinks quite rapidly. If you then have a 24/7 availability requirement, we are looking at staffing for that, and that is normally five persons. It's as easy as that.
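The back-of-the-envelope maths, assuming a normal 40-hour working week per admin and one admin on duty around the clock, looks like this:

```python
# Rough check of the "five domain admins" rule of thumb, assuming a
# 40-hour working week and one admin covering the clock at all times.
import math

hours_to_cover = 24 * 7    # 168 hours in a week
hours_per_admin = 40       # assumption: one normal working week per admin

print(math.ceil(hours_to_cover / hours_per_admin))  # 168 / 40 = 4.2 -> 5
```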

Processors are vulnerable. Who knew? Most interesting is that this is a flaw in the architecture itself. Those types of errors tend to be harder to fix because they are part of the overall design. This specific case will be interesting to follow. I don't expect it to be that much of a problem in reality: most fixes will sort out most of the problems, and the solutions that actually are sensitive will be protected in other ways. Thankfully we are no longer living in a monolithic world when it comes to IT.

I would say that the bigger problems reside in older systems that are not adequately protected in the first place, but there are so many easier attacks to conduct that by the time attackers finally reach the level where they start using Meltdown as a way in, the hardware will have been replaced with a secure version.

Tier 0 and GDPR

I love working with security, and I'm fully aware that there is always an expert who knows the details better than I do, and another expert who knows the whole field better than I do. I can only plod along and do my best. Sometimes, however, I'm baffled by how some people are blind to the obvious just because they have worked for too long in too narrow a field.

I recently joined a Facebook group that discusses security. It is on a very technical level, but still interesting. I got into quite an interesting discussion regarding GDPR protection solutions, mainly encryption tools and network protection tools. After a few minutes I challenged the database security experts who wanted to implement a database encryption tool that, according to the salesperson, would sort out most of GDPR. Yes, it would give the database a lot more protection, but as it was dependent on authenticated users, and they had no control over who gave whom access, they were still susceptible to a credential theft attack. After a few rounds they finally understood that they have to take a broader view of GDPR (not even mentioning all the organizational and legal things they need to manage as well). However, halfway through the discussion a network security specialist gave chase and, in a very looking-down-the-nose tone, told us all that we were all wrong, that all protocols are broken and that it is impossible to protect a database as long as an attacker has control over the network. No matter what suggestions we came up with, the answer was always: "all protocols have been broken and you are all wrong because you don't understand security the way I do". I'm the first to admit that there are quite a few people who know security better than I do, but during my 20+ years I have managed to pick up a thing or two, and I know that there are encryption schemes that will not be broken easily within the next 20+ years, and that if you manage your networks with good account management and secure workstations, they are almost impossible to break without resorting to physical threats or good old social engineering tactics.

Still, it was an interesting experience to try to argue with someone who has his thoughts set in concrete. Most interesting was that whatever TLS encryption I came up with was flawed, but any other network encryption he came up with was brilliant.

In the end I decided to leave the group, because while I really like a good solution discussion, I dislike all the mine-is-bigger-than-yours arguments. There tend to be many of those in the area of security. There are always different solutions to the same problem, and some fit the client, some don't. It is important to use the right one and not go with dogma.

Woohoo! The day has come when I turn 45, and I still have many years left to work in the most interesting of fields. There are so many new vulnerabilities still waiting to be found. I expect the coming year to focus on lower-level attacks and also on more stealthy attacks. At the level of operating systems and applications, the cost of finding a flaw has started to rise. It is absolutely not impossible, but it becomes harder by the day, with more and more companies adopting secure coding methodologies. Hence, the bad guys will try to find other, more persistent ways to attack a computer, as there is money to be made by owning computers.

I expect more attacks like Rowhammer or similar to surface during 2018, attacks that are so-called silicon-based: perhaps a specific error in a processor, a GPU, or why not in the firmware of a hard drive? As those depend on the hardware, they will be costly to fix, if they are ever fixed at all. They could also make it possible to jump from a guest to a host in a cloud computing scenario.

I also expect more stealth, because ransomware has become harder to make money from. People are making backups to a larger degree, and as you never know whether you will actually receive a decryption key if you pay, it is probably better to spend the money on recovery. I expect a shift from ransomware to cryptocurrency mining that hijacks processes and runs them at 30-40% of your processor's power, making them not directly visible or disruptive.
