Bořivojova 878/35, 130 00 Prague
+420.734 463 373

Statement on the attack of September 14, 2020


Dear customers,

On September 14, 2020, our company Gransy s.r.o. suffered a serious failure of its services. Unfortunately, this outage took some of our services offline for almost 14 days. A great deal has been written about the outage on various portals and in several media outlets, yet no editors have asked us for a statement or comment. We would therefore like to explain what happened, describe how the whole incident unfolded, and outline how it impacted the company’s operations.

What happened

Just before 8 a.m. on Monday, September 14, 2020, a very serious cyber incident occurred: the attacker deleted data on key elements of our infrastructure, i.e. all system configurations and system backups. According to the analysis of the attacker’s movement on the servers and the ongoing investigation, the attack was conducted with the aim of causing maximum damage; its apparent goal was the liquidation of our company. The force, type, and manner of the attack are completely unlike anything we have encountered in the company’s existence, which was confirmed by the experts we consulted and who analyzed the whole incident. The attacker tried to disguise the event as ransomware by encrypting some unimportant files, but the real damage was done by overwriting and erasing data from our systems, which is not the nature of a ransomware attack.

The attack crippled our client services as well as our internal systems, which made not only our customer support but also the company’s management unreachable. As a result, we could not officially respond to any media queries sent at the time of the event. We assume this was also the source of the many speculations that have appeared in various articles and discussions in recent days. We would like to address the most serious of them.

1) Leak of Auth-Info codes (domain transfer codes)

A leak of Auth-Info codes could not and did not occur, because no Auth-Info codes are stored in our databases. The assumption probably arose from the fact that the Auth-Info code can be seen in the domain detail in the client interface. However, that value is retrieved dynamically from the registry: it is fetched only at the moment the domain detail is displayed, by a live, uncached query to the registry. The second thing that may have fed this assumption in certain circles was the action of the .CZ registry (CZ.NIC), which regenerated all Auth-Info codes as a precaution. We of course appreciate this precautionary measure by the registry, but no incident occurred in this regard.
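To illustrate what “dynamic retrieval” means in practice, here is a minimal sketch of the general pattern: the Auth-Info code is fetched from the registry only when a domain detail page is rendered and is never written to a database. All class and function names below are hypothetical placeholders for illustration, not our actual code.

```python
# Minimal sketch of on-demand Auth-Info retrieval (hypothetical names, not actual Gransy code).
from dataclasses import dataclass


@dataclass
class DomainDetail:
    name: str
    expires: str
    auth_info: str  # transfer code; only ever held in memory for one response


class FakeRegistryClient:
    """Stand-in for a live registry connection, used here only so the sketch runs."""

    def domain_info(self, domain_name: str) -> dict:
        # In reality this would be an uncached, EPP-style info query to the registry.
        return {"name": domain_name, "expires": "2021-09-14", "auth_info": "transfer-code-from-registry"}


def render_domain_detail(registry_client, domain_name: str) -> DomainDetail:
    """Build the domain detail view from a live registry query; nothing is cached or stored."""
    info = registry_client.domain_info(domain_name)
    return DomainDetail(name=info["name"], expires=info["expires"], auth_info=info["auth_info"])


print(render_domain_detail(FakeRegistryClient(), "example.cz"))
```

The point of the pattern is that a dump of the database would simply contain no transfer codes to leak; they exist only in the registry and in the single response shown to the client.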

2) Ransomware from employee email       

This speculation is untrue, and given the architecture of our systems it could not have happened. The application layer to which employees have access, and which also handles incoming mail, is web-based and completely separate from the server layer, so there is no way for an attacker to reach the servers through that interface.

3) Data leak from the server    

Based on the observed data traffic and the length of time the attacker spent on the servers, there is no indication that client data was stolen. We have a clear picture of our data traffic from information provided by our connectivity providers, i.e. completely independent parties located outside our infrastructure.

The length of the attacker’s access to the servers, as well as the extent of the damage done, is known, and the servers were taken offline as soon as possible after the initial analysis of what had happened.
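As an illustration of the kind of check this reasoning relies on, the sketch below compares egress traffic reported by an upstream provider for the incident window against a normal baseline; a bulk exfiltration of client data would show up as clearly elevated outbound volume. The figures and names are assumptions made up for the example, not our actual monitoring data.

```python
# Illustrative egress sanity check (assumed figures and names, not real monitoring data).

def exfiltration_suspected(observed_egress_gb: float,
                           baseline_egress_gb: float,
                           tolerance: float = 1.5) -> bool:
    """Flag the incident window if outbound traffic clearly exceeds the usual baseline."""
    return observed_egress_gb > baseline_egress_gb * tolerance


# Example with made-up figures: provider-reported egress for the attack window
# versus the typical egress for the same hours on a normal day.
observed = 310.0   # GB reported by the connectivity provider for the window
baseline = 295.0   # GB typically seen in the same window

print("bulk exfiltration suspected:", exfiltration_suspected(observed, baseline))
```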

4) Hidden Trojan horse on servers    

Fortunately, this speculation is also wrong. All servers were rebuilt from external backups and database fragments on clean installations: every server was reinstalled, and no original system was restored, not even where the system was unaffected. The restoration was also carried out with the aim of making the entire infrastructure more resilient, so that a similar attack cannot recur.

What were the consequences of the attack?

The consequences of the attack on our company were devastating. The attacker managed to wipe out the entire infrastructure of Gransy s.r.o. and all related projects, which made all services unavailable to our end clients.

“If I were to compare the attack to something from the real world, I would choose an arsonist who burns down a company’s production line, offices, canteen, vehicle fleet and warehouses, that is, everything the company owns.”

During the attack and the ongoing communication we were confronted with various opinions, of which we would like to comment primarily on one: the redundancy of technology. Paradoxically, a fire that literally left a data center “in ashes” would have had significantly smaller consequences for us than the situation that arose. Our infrastructure was and is distributed between two geographically distant data centers in the Czech Republic (Prague 3, Prague 10) and two data centers in Germany, about 200 km apart.

The recovery process

After determining the extent of the damage, we began to rebuild the entire company from the various available artifacts and offline backups. We managed to rebuild Subreg.cz from scratch in 28 hours, and other services followed in quick succession. In total, more than 60 servers were affected, most of them carrying dependent follow-up services which, taken together, represent what our company is.

Services are being restored according to a pre-approved emergency plan. Our highest priority was the deployment of services with the greatest impact on clients, followed by our internal services. This recovery also includes many related services that may not have a direct impact on the client but are essential for the operation of client-facing applications. At times it therefore looked as if nothing was being done (and we were often blamed for this in communication), but the opposite was true: we worked almost non-stop the whole time, pausing only for coffee, food, and some fresh air, and sleep was practically forbidden. There are individuals among us who have slept 16-24 hours in total over the last 14 days. Some operations are also limited simply by their duration, which even the best plan cannot influence, typically data transfers consisting of very many small files, where there is a technical limitation that cannot be overcome here or anywhere else.
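To illustrate why transfers of many small files are bound by duration rather than bandwidth, the sketch below uses a simple model in which each file pays a fixed per-file overhead (connection setup, metadata, round trips) on top of its payload. All figures are assumptions chosen for illustration and do not describe our actual transfers.

```python
# Back-of-envelope model of why many small files transfer slowly (all figures assumed).

def transfer_time_seconds(n_files: int, avg_file_kb: float,
                          bandwidth_mbps: float, per_file_overhead_s: float) -> float:
    """Estimate total time as payload time plus a fixed per-file overhead."""
    payload_bits = n_files * avg_file_kb * 1024 * 8
    payload_time = payload_bits / (bandwidth_mbps * 1_000_000)
    overhead_time = n_files * per_file_overhead_s
    return payload_time + overhead_time


# Example: 2 million 4 KB files over a 1 Gbps link with 10 ms of per-file overhead.
# The payload itself takes about a minute; the per-file overhead adds roughly 5.5 hours.
print("hours:", transfer_time_seconds(2_000_000, 4, 1000, 0.01) / 3600)
```

In such a model the total time is dominated by the number of files rather than their combined size, which is why these steps simply take as long as they take.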

Sometimes, however, even the best plan has a crack, and we have identified a clear need to improve our crisis communication with stakeholders. For example, the infrastructure of our customer support center has still not been completely restored, and unfortunately there was no communication plan B. Our Facebook page, which proved completely unsuitable for crisis communication, thus became the main communication channel. Given the number of questions, multiplied in various ways in the comments, and the volume of questions arriving as direct messages, it was unfortunately impossible for our support team to answer in a timely and thorough manner. In general, it is simply impossible to provide a relevant answer immediately during a major outage, although we understand that customers expect one; there are also moments when no time estimate for recovery is available, simply because it is not possible to predict the completion of a specific troubleshooting task or the recovery status of the related services. We coordinated staff schedules so that the restoration work itself could continue around the clock, i.e. without any downtime.

Our lesson

The whole situation is completely new to us; none of us has experienced anything like it in all our years in the industry, and we are taking two lessons from it.

First, we will work on crisis communication and build a completely independent communication system for our customers, one able to fully take over as the primary communication channel in the event of an unexpected incident.

Our main priority is that this communication system never again has to be replaced by a social network, and that information is easier to reach even for customers who do not use social networks.

Secondly, we will fundamentally change the security architecture of our infrastructure so that a similar incident cannot be repeated. We will inform you separately about the specific measures we are preparing; we are working on them in collaboration with experts in the field in order to consider and simulate risks that are difficult to predict.

We would like to thank all of our customers for their patience and trust.

Martin Dlouhy

COO Gransy s.r.o.
