Server Hacked

A little off topic, but I know that many readers run their own servers. My experience with a system compromise might help those bloggers avoid wasting a day or more cleaning up the mess.

0) Background

The system was running Debian Squeeze with Apache / MySQL / Exim4 / Dovecot, all from the stable build. No package was more than a week out of date.

Also running were WordPress (various versions 2.5 – 2.8), Dokuwiki and Webmin. RKHunter was installed and running daily.

1) First Signs

The most obvious sign that something wasn’t right was the system running out of memory around 15:50, followed by unexpected cron messages (see below). Unable to log in, I forced a reboot and inspected the Apache logs. The system was being accessed by several bots and I initially thought it had been overloaded by a badly behaved one.

The cron message I was receiving was:


Subject: Cron f Opyum Team

change

mv: cannot stat `/usr/bin/ses': No such file or directory
mv: cannot stat `/bin/mr': No such file or directory
/usr/sbin/change: line 6: mr: command not found

I was getting these every few minutes, though many were truncated. Suspecting some sort of attack, I looked for information online and found only a single mention, which contained nothing useful.

Running RKHunter showed no problems.

The lack of information, the lack of warnings from RKHunter, and my own lack of time meant I did not pursue this until the next morning.

2) Compromise

Next morning my inbox was full of identical cron messages though the server seemed to be serving websites correctly.

Running RKHunter again revealed nothing, and then I finally thought to check my crontab and found this entry:


* * * * * root f Opyum Team

Attempting to remove the job, I found that even as root I no longer had permission to change the crontab. I located the command ‘f’ in the /bin directory but was unable to mv, delete or chmod it.
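
In hindsight, root being unable to touch a file is typical of the ext immutable attribute, which is what a commenter below confirms was used here. A minimal check and fix, assuming e2fsprogs is installed (standard on Debian):

# An 'i' in the lsattr output means the file is immutable and cannot be changed, even by root
lsattr /bin/f /etc/crontab
# Clear the flag so the file can be moved, deleted or edited again
chattr -i /bin/f /etc/crontab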

Running ‘ps -x’ revealed a large number (a hundred or more) of processes called ‘i’.
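
For anyone in the same position, a sketch of how processes with an exact name can be listed and killed in one go (pgrep/pkill come from the procps package on Debian):

# List the PIDs of every process whose command name is exactly 'i'
pgrep -x i
# Kill them with SIGKILL, since a hostile process may ignore gentler signals
pkill -9 -x i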

Whilst killing processes and trying to regain control of the machine, I noticed a process referencing:


http://ilove15.selfip.com/~ilove21/pictures/i.jpg

Unfortunately, at the time I made no further notes and could not remember which process was responsible.

3) Investigation

Looking back through the logs after the machine had been rebuilt, the first sign of anything wrong I can find is in the syslog:


Mar 31 13:56:01 owl /usr/sbin/cron[6731]: (*system*) RELOAD (/etc/crontab)
Mar 31 13:56:01 owl /USR/SBIN/CRON[31937]: (root) CMD (f Opyum Team)
Mar 31 13:56:13 owl kernel: device eth0 entered promiscuous mode
Mar 31 13:57:01 owl /USR/SBIN/CRON[32008]: (root) CMD (f Opyum Team)

The cron job repeats every minute, and then:


Mar 31 14:22:01 owl /USR/SBIN/CRON[1464]: (root) CMD (f Opyum Team)
Mar 31 14:23:01 owl /USR/SBIN/CRON[2035]: (root) CMD (f Opyum Team)
Mar 31 14:23:10 owl kernel: nf_conntrack: table full, dropping packet.
Mar 31 14:23:10 owl kernel: nf_conntrack: table full, dropping packet.
Mar 31 14:23:12 owl kernel: nf_conntrack: table full, dropping packet.
Mar 31 14:23:12 owl kernel: nf_conntrack: table full, dropping packet.

with the occasional:

Mar 31 14:23:21 owl kernel: __ratelimit: 17 callbacks suppressed

which I’m guessing is caused by the machine becoming overloaded / running out of memory.
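
The nf_conntrack messages mean the kernel’s connection-tracking table was full, which fits a machine suddenly opening or receiving far more connections than normal. A quick way to compare current usage against the limit (sysctl names as on a Squeeze-era kernel):

# Number of currently tracked connections versus the configured maximum
sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max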

Around 15:50 I noticed the problem and rebooted the machine, after which I get:

Mar 31 20:45:07 owl kernel: __ratelimit: 344 callbacks suppressed
Mar 31 20:45:07 owl kernel: nf_conntrack: table full, dropping packet.
Mar 31 20:45:07 owl kernel: nf_conntrack: table full, dropping packet.
Mar 31 20:45:07 owl kernel: nf_conntrack: table full, dropping packet.

repeated until I looked at the machine again at 9am the next morning.

Examining auth.log I found:

Mar 31 06:43:06 owl sshd[25067]: Did not receive identification string from 125.67.235.186
Mar 31 06:50:26 owl sshd[25114]: Invalid user salina from 125.67.235.186
Mar 31 06:50:26 owl sshd[25114]: pam_unix(sshd:auth): check pass; user unknown
Mar 31 06:50:26 owl sshd[25114]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=125.67.235.186
Mar 31 06:50:28 owl sshd[25114]: Failed password for invalid user salina from 125.67.235.186 port 58653 ssh2
Mar 31 06:50:33 owl sshd[25116]: Invalid user inger from 125.67.235.186
Mar 31 06:50:33 owl sshd[25116]: pam_unix(sshd:auth): check pass; user unknown
Mar 31 06:50:33 owl sshd[25116]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=125.67.235.186
Mar 31 06:50:34 owl sshd[25116]: Failed password for invalid user inger from 125.67.235.186 port 34703 ssh2
Mar 31 06:50:44 owl sshd[25126]: Invalid user adis from 125.67.235.186

The attempts seemed to switch between random user names and attacks on root:

Mar 31 06:53:38 owl sshd[25175]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=125.67.235.186 user=root
Mar 31 06:53:40 owl sshd[25175]: Failed password for root from 125.67.235.186 port 41105 ssh2
Mar 31 06:53:43 owl sshd[25177]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=125.67.235.186 user=root
Mar 31 06:53:45 owl sshd[25177]: Failed password for root from 125.67.235.186 port 46377 ssh2
Mar 31 06:53:48 owl sshd[25179]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=125.67.235.186 user=root
Mar 31 06:53:50 owl sshd[25179]: Failed password for root from 125.67.235.186 port 50747 ssh2

Checking the logs shows that this had been attempted since at least the 27th of February from a variety of IP addresses. Shortly before the time the system was definitely compromised (13:56:01), the nature of the attacks changed.

Mar 31 13:34:24 owl sshd[30703]: Address 93.186.118.171 maps to ns1.ozgurbilisim.org, but this does not map back to the address - POSSIBLE BREAK-IN ATTEMPT!
Mar 31 13:34:24 owl sshd[30703]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=93.186.118.171 user=root
Mar 31 13:34:26 owl sshd[30703]: Failed password for root from 93.186.118.171 port 58352 ssh2
Mar 31 13:34:27 owl sshd[30705]: Address 93.186.118.171 maps to ns1.ozgurbilisim.org, but this does not map back to the address - POSSIBLE BREAK-IN ATTEMPT!
Mar 31 13:34:27 owl sshd[30705]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=93.186.118.171 user=root
Mar 31 13:34:28 owl sshd[30705]: Failed password for root from 93.186.118.171 port 58913 ssh2

After this point there are only attacks on the root account; other user names are not tested. Twenty minutes later, the attacker successfully gained access.

Mar 31 13:55:06 owl sshd[31859]: Address 93.186.118.171 maps to ns1.ozgurbilisim.org, but this does not map back to the address - POSSIBLE BREAK-IN ATTEMPT!
Mar 31 13:55:06 owl sshd[31859]: Accepted password for root from 93.186.118.171 port 37209 ssh2
Mar 31 13:55:06 owl sshd[31859]: pam_unix(sshd:session): session opened for user root by (uid=0)
Mar 31 13:55:13 owl sshd[31889]: Address 93.186.118.171 maps to ns1.ozgurbilisim.org, but this does not map back to the address - POSSIBLE BREAK-IN ATTEMPT!
Mar 31 13:55:13 owl sshd[31889]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=93.186.118.171 user=root

It continued trying until 14:36, when it mostly stopped and login attempts became only occasional.

Approximately 600 attempts were logged between 13:34 and 13:55 (a rough way of counting these from auth.log is sketched after the list below). Whilst the root password was not the strongest in the world, it was a non-English, non-obvious word. Either:

  • a) The brute force attack got very lucky.
  • b) The logging is incomplete and many more were tried.
  • c) This was the outcome of a single, long running attack going back over a month.
  • d) The attackers had some knowledge or other technique to narrow their attack.
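
For reference, a count like this can be pulled straight from auth.log; a sketch, assuming the standard OpenSSH “Failed password” lines shown earlier:

# Count failed SSH logins per source address, busiest first
grep 'Failed password' /var/log/auth.log | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn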

A search of other logs did not reveal any suspicious activity around the time of the breach.

4) Lessons Learnt by an Occasional Sys Admin with Limited Time and Resources

Most of these things need investigation. Suggestions for solutions are welcome.

1) Better logging and monitoring is required. Though, given the attack seems to have taken less than 30 minutes, I’m not sure how much this would have helped.

2) Tighten up SSH somehow to slow down or prevent brute force attacks (a possible starting point is sketched after this list).

3) Better passwords – done. Though the problem of remembering a longer random password forces me to write it down, so it is not an ideal solution.
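
As a starting point for point 2, a minimal /etc/ssh/sshd_config sketch along the lines several commenters suggest below. It assumes key-based logins are already working before password authentication is switched off, and the user name is only an example:

# /etc/ssh/sshd_config (excerpt) - reload afterwards with: /etc/init.d/ssh reload
# No direct root logins; use an ordinary account plus su/sudo instead
PermitRootLogin no
# Keys only, so there is no password left to brute force
PasswordAuthentication no
# Drop a connection after a few failed attempts
MaxAuthTries 3
# Only allow the listed accounts to log in at all (example user name)
AllowUsers chris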

Thanks

Many thanks to the Bytemark team for so quickly getting me a rebuilt machine and their overall support. I cannot recommend them highly enough for anyone looking for a host.

Also thanks to everyone on Twitter and Facebook for their help and support.


Image Credit – Server Room by Torkild Retvedt – CC-BY-SA-2.0

 

11 comments

  1. Yikes, sorry to hear about that Chris. There’s always a sinking feeling when something like this happens.
    As it stands now though, glad it seems everything is back up and running again.

  2. You should install denyhosts:
    “””DenyHosts is a program that automatically blocks SSH brute-force attacks by adding entries to /etc/hosts.deny. It will also inform Linux administrators about offending hosts, attacked users and suspicious logins.”””

  3. Hi Chris,

    Found the exact same results this morning, word for word.

    How did they manage to add a cron job without gaining access?

    1. They gained access by brute forcing the root password. Once they are in as root, they can do whatever they like.

      Check your auth.log. You’ll probably find a successful root log in at a time that certainly wasn’t you.

      Chris

  4. Solution (for debian users):

    apt-get install --reinstall e2fsprogs
    chattr -i /bin/f
    rm /bin/f

    chattr -i /etc/crontab
    Edit /etc/crontab file and remove the following line:

    * * * * * root f Opyum Team

    And that’s it 🙂

  5. I just cleaned up a system that was hit by the same attack. You should also check this stuff:

    ssh/sshd was trojaned – backups stored in /etc/rpm

    /etc/ssh/sshd_config was modified

    /bin/bash appeared to be modified

    check /var/tmp for hidden files

    # find /var/tmp
    /var/tmp
    /var/tmp/
    /var/tmp/ /…
    /var/tmp/ /…/screen.tgz
    /var/tmp/ /…/mfu.txt
    /var/tmp/ /…/pass_file
    /var/tmp/ /…/500.php
    /var/tmp/ /…/xxx.txt
    /var/tmp/ /…/vuln.txt
    /var/tmp/ /…/f
    /var/tmp/ /…/screen
    /var/tmp/ /…/i
    /var/tmp/ /…/s
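
    One way to spot trojaned binaries like the sshd and /bin/bash changes above is to compare installed files against the packaged checksums. A sketch for Debian; it assumes the package database and debsums itself can still be trusted, so treat hits as a pointer for further checking rather than proof:

    # Install the checker and list installed files whose checksums no longer match their package
    apt-get install debsums
    debsums -c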

  6. Here are a few mitigation ideas.

    1) use a secure password (seems so simple, I know)

    2) block external SSH with iptables or another firewall if available (see the sketch after this list)

    3) use TCP wrappers – add sshd:ALL to /etc/hosts.deny and only allow trusted hosts/networks in /etc/hosts.allow

    4) disable direct root login capability in /etc/ssh/sshd_config and/or PAM

    5) configure SSH to only allow public key authentication

    6) set up egress firewall rules – don’t allow outbound traffic to ports 22, 25, 80, 443, 8080, etc.

    7) remove or limit the use of the following programs: gcc, php, perl, wget, curl, etc.
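
    For point 2, a sketch of a simple rate limit for new SSH connections using the iptables "recent" match; the numbers are arbitrary, and it assumes the INPUT chain otherwise accepts SSH, so test from a second session to avoid locking yourself out:

    # Record each new connection to port 22, then drop any source that opens 5 or more within 60 seconds
    iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --name SSH --set
    iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --name SSH --update --seconds 60 --hitcount 5 -j DROP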

  7. @Frank,

    Thanks for taking the time to share the information. It will be a great help to others struck by the same attack.

    Chris

  8. Hi,

    I have the same symptoms on my customer’s server (Red Hat EL). I found a /usr/sbin/t.txt file containing a list of IP addresses; whois points them to ranges belonging to ISPs in the US and the UK. The attack method used ssh (22), since this server can ssh out to the Internet.

    The firewall flagged this as a DDoS.

    It is not an Internet-facing server. Most login attempts used the root id with no failed attempts – guess what? The administrator used a very well-known password (P@ssw0rd). All I have now is the wtmp file (in binary format), which at least records the local IP addresses that ssh’d to this server. Most probably the attacker found a weak Internet-facing host and then jumped to this Linux box.

    Recovery: reformat and do hardening.
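
    For what it is worth, a copied wtmp file can be read on another machine with the standard last tool, which should show the source addresses and times of those root logins:

    # Read logins from a copied wtmp file instead of the local /var/log/wtmp
    last -f wtmp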
