Invariant Properties


Better Ad Blocking Through Pi-Hole and Local Caching

Bear Giles | August 26, 2018

Ethical consideration

In an ideal world we would not need to block ads. It hurts small sites that depend upon the revenue to offset hosting and network expenses. These sites are giving me something of value (or else I wouldn’t visit them) and it’s not unreasonable for them to ask for a little in return. Many people rush into ad blocking without considering whether it harms others.

However, there are three serious issues with current ads.

They are often poorly written. Specifically, some ads leak massive amounts of memory – so much that I’ve had my browser crash several times per day. The crashes stop after I turn on javascript blocking and ad blocking, so it’s not hard to conclude what was causing them. I’ve tried a less intrusive approach – disabling javascript alone – but it breaks some sites despite whitelisting (I’m looking at you, gotomeeting), and some ads with memory leaks still get through occasionally. Maybe browsers will someday let us cap the amount of memory a webpage can use… but they don’t today, so crashes due to bad javascript remain a risk.

They may be malware vectors. It’s not common but there have been several instances of ads carrying malware. Again javascript blocking helps but it only protects my desktop. My family’s phones, tablets, and other computers (since I can’t expect others to deal with the hassles of knowing when to whitelist javascript sites) are unprotected.

They’re ineffective. This refers to the harm to the advertiser, not the site owner. I can’t remember ever buying anything through an ad on a website. I can only remember clicking through an ad once in the last few years, and that was a ‘WTF?’ moment where I wanted to see if the site was actually advertising what I thought it was. I don’t want to say I would never buy something through an online ad… just that I’m clearly not their target audience. Is it fair for the advertiser to pay for an ad that will always be ignored?

Many people also cite the proliferation of ads on some sites, ads that are increasingly aggressive at catching the reader’s attention. I don’t consider that an issue that justifies ad blocking since there’s an easier remedy – vote with your feet and don’t visit sites that are predominantly ads. To be fair, this is easy for me to say since I’m rarely interested in the content of those sites – I mostly come across them when following ‘WTF?’ links from others.

Common solutions

There are two common solutions. The first is disabling javascript, e.g., with “safe script”. It allows ads to be displayed while defanging them. It’s good in theory, and whitelisting sites is normally just a moment’s work, but it has costs. Some sites don’t work unless blocking is disabled entirely, e.g., gotomeeting, even if I ‘trust’ the domain. (I suspect the landing page redirects to a second page that pulls in javascript from sites that I don’t know to whitelist.)

In this case it’s obvious that the site is failing, but in other cases the failure is more subtle and easily overlooked. E.g., some sites hide links and buttons by default and use javascript to make them visible or to perform an action. Sometimes the breakage is obvious – you fill out a form and there’s either no submit button or it doesn’t do anything – but sometimes you won’t notice the problem unless you happen to visit the site with javascript blocking turned off.

The second is an ad-blocker browser plugin like ‘Adblock Plus’. It works, but I’ve noticed significant delays while the status bar says the browser is waiting for a response from the ad blocker site. This seems most common when recovering from a browser crash – I usually have a lot of tabs open and browsers throttle the number of concurrent connections to any particular site – but I’ve seen it at other times as well.

There has to be a better solution.

Pi-Hole (Ad Blocker)

The first half of the solution is Pi-hole. It is a small app designed to run on a Raspberry pi (hence the name) although you can also run it on a micro-instance at AWS or Digital Ocean. It works as a caching DNS server with a blacklist. The sites on the blacklist resolve to the pi-hole server which responds with a simple page.

You will get the best performance if it’s running on a Raspberry pi on your home network. (Be sure to configure it with a static IP address, not a DHCP address assigned by the router, in this case.) You will have the most flexibility if it’s running on a micro-instance at a cloud provider – you can access the server while away from home. There’s no reason why you can’t do both – list the Raspberry pi on your home network as your primary DNS server and the cloud-based server as your secondary DNS server.

A good static address for a Raspberry pi is 192.168.1.53 (or anything ending in .53) since the standard DNS port is 53. Make sure whatever you pick is outside of your router’s DHCP address range.
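
For example, on Raspbian the static address can be configured in /etc/dhcpcd.conf. A minimal sketch, assuming a wired connection and a router at 192.168.1.1:

interface eth0
static ip_address=192.168.1.53/24
static routers=192.168.1.1
static domain_name_servers=127.0.0.1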

Installation

These instructions assume you’re running either Raspbian (on a Raspberry pi) or Ubuntu (on a cloud instance).

  • Install curl: apt-get install curl
  • Install pi-hole: curl -sSL https://install.pi-hole.net | bash

You’ll be prompted for a few configuration details and eventually end up with a dialog page that shows the ‘admin’ password. Write it down.

Normally I stay far, far away from piping something downloaded from the internet directly into bash. I made an exception in this case since it’s a dedicated system with limited resources. It’s also not hard to download the script into a local file for review before running it:
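
curl -sSL https://install.pi-hole.net -o basic-install.sh
less basic-install.sh        # review the script; the filename here is arbitrary
bash basic-install.sh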

Assuming you’re using the static address mentioned above, your admin page is at http://192.168.1.53/admin. The dashboard shows basic statistics (number of queries, number of queries blocked, how often it hit its cache, etc.). My server has only been up for a few hours so there have been relatively few queries.

Another page lists the actual DNS queries. You can use this to verify that you’re hitting this server for your DNS queries. Or to catch your kids going somewhere they shouldn’t go!

If your router supports user-specified DNS servers you should add this address as your primary DNS server. IMPORTANT: keep your ISP’s DNS server (or an alternative DNS server) as the secondary or tertiary DNS server. This will prevent you from effectively losing internet access if something goes wrong with your pi-hole server.
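
You can also test the server directly before touching any router settings. A quick check with dig, assuming the static address above and that doubleclick.net appears on your blocklist (it is on the default lists):

$ dig +short invariantproperties.com @192.168.1.53
$ dig +short doubleclick.net @192.168.1.53

The first query should return a normal public address; the second should return the pi-hole’s own address (or 0.0.0.0, depending on the configured blocking mode).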

Your systems should automatically pick up the new DNS servers as their DHCP leases are renewed. This may take several days. On Ubuntu you can try to force the issue by manually releasing and renewing the DHCP lease

$ sudo dhclient -r
$ sudo dhclient

and flushing the dnsmasq cache by restarting the network manager

$ sudo service network-manager restart

However it didn’t seem to have any effect when I did this.

Local caching (dnsmasq)

Linux systems, or at least Ubuntu systems, use dnsmasq for their DNS lookups. It is configured via the DNS settings provided by the DHCP server on your router. It does not perform caching by default.

It is fairly straightforward to enable caching. This isn’t particularly important when you’re using pi-hole on a Raspberry pi since network connectivity should never be a problem but it can be helpful if you’re using a cloud provider.

Instructions

  • Create the file /etc/dnsmasq.d/localhost. This tells dnsmasq to provide DNS service on the localhost address.

    listen-address=127.0.0.1

    Note: dnsmasq is already listening to 127.0.0.53. This won’t change that. I think (but am not certain) that one interface will provide caching and the second will always hit the upstream DNS server.

  • Edit /etc/dhcp/dhclient.conf. We want to prepend our local cache. We can also explicitly add our pi-hole server to ensure that we always hit the pi-hole regardless of the router settings.
    #prepend domain-name-servers 127.0.0.1;
    #require subnet-mask, domain-name-servers;
    prepend domain-name-servers 127.0.0.1,192.168.1.53;
    require subnet-mask, domain-name-servers;
  • Restart the network manager.
    $ sudo service network-manager restart

    You can now perform two queries to verify caching has been enabled.

    In the first case there’s no entry in the local cache so we hit the upstream server. This has a slight delay since I hit my Digital Ocean instance, and it in turn has to hit its upstream provider.

    bgiles@eris:/etc/dhcp$ dig google.com

    ; <<>> DiG 9.11.3-1ubuntu1.1-Ubuntu <<>> google.com
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9959
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 65494
    ;; QUESTION SECTION:
    ;google.com.			IN	A

    ;; ANSWER SECTION:
    google.com.		166	IN	A	216.58.218.238

    ;; Query time: 56 msec
    ;; SERVER: 127.0.0.53#53(127.0.0.53)
    ;; WHEN: Sun Aug 26 09:21:47 MDT 2018
    ;; MSG SIZE  rcvd: 55

    In the second case there’s an immediate response since the value is in the cache.

    bgiles@eris:/etc/dhcp$ dig google.com

    ; <<>> DiG 9.11.3-1ubuntu1.1-Ubuntu <<>> google.com
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 33917
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 65494
    ;; QUESTION SECTION:
    ;google.com.			IN	A

    ;; ANSWER SECTION:
    google.com.		164	IN	A	216.58.218.238

    ;; Query time: 0 msec
    ;; SERVER: 127.0.0.53#53(127.0.0.53)
    ;; WHEN: Sun Aug 26 09:21:48 MDT 2018
    ;; MSG SIZE  rcvd: 55

    Configuring VPNs

    Finally, we can explicitly add these DNS servers to our VPN settings. This covers us even if the VPN provides its own DNS settings when we connect to it.

    If you’re running your own VPN server you can edit your /etc/openvpn/server.conf file to push your pi-hole and backup DNS servers. This means you’re covered even if you don’t modify your network manager settings. IMPORTANT: if you use this VPN while away from home you will want to point to a pi-hole running on a cloud provider instead of your home network.
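
    A sketch of the relevant server.conf lines – the first address is the home pi-hole used above, the second a hypothetical cloud pi-hole:

    push "dhcp-option DNS 192.168.1.53"
    push "dhcp-option DNS 203.0.113.53"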


    If you do this, remember to reload the settings: sudo service openvpn reload.

    A final comment on caching

    Wrapping up, I want to add a final comment on caching. Caching DNS entries is dangerous if done improperly. IP addresses may not change often but they do change, and it’s important to recognize that.

    Fortunately there’s a solution to this. All DNS records have a time-to-live (TTL) value – it’s basically a guarantee that the entry won’t change within the next TTL seconds. There are benefits to both brief TTLs (e.g., 5 minutes) and long TTLs (e.g., a week). A cache does no harm if it uses the TTL as guidance for how long to keep a value. Some caching DNS servers will continue to provide stale information if the upstream DNS server(s) become non-responsive, others will immediately discard the value.

    This is why so many of the entries above were forwarded. The responses are cached but have a short TTL. Keep that in mind when looking at the dashboard and logs and evaluating whether this is worth the effort.


Setting Up Multi-Factor Authentication On Linux Systems

Bear Giles | November 23, 2017

In my last article, Setting Up SSH Identity Forwarding on Jump Hosts, I discussed how to improve security by passing SSH identity information through your jump host rather than keeping SSH keys on it. There’s no reason for your servers to have private SSH keys on them[1] – and that means one less thing to worry about falling into the hands of an attacker with access to either your system or your backup media.

Using SSH identity forwarding is also much more convenient than having to reenter a password every time you hop from your jump host to a server, but that was a secondary concern.

SSH identity forwarding has one serious drawback – while you’re logged in, the forwarded ssh-agent connection makes your key usable from the jump host. Your risk exposure is much lower than if you kept your private key on the jump host but it’s not eliminated.

There’s a solution to this: multi-factor authentication. An attacker with just your SSH identity information will still be unable to connect to the other systems.

The setup process is straightforward.

  1. Install libpam-google-authenticator on the server.
  2. Create the MFA key using google-authenticator -l ‘name@system’ where ‘name@system’ is meaningful to the user. This will display a scannable image on the console, the recovery codes, and the BASE-32 encoded key. It will also create a .google_authenticator file containing this information.
    • Scan the image with one or more smart devices with the Google Authenticator app (or something equivalent).
    • Add the BASE-32 encoded key to a “one time password (OTP)” field in 1Password (or something equivalent).
    • Copy the .google_authenticator file to a safe location.

    The PAM module uses the presence of this file as a marker of which users require MFA. You do not have to require MFA for all users although it would be a good idea.

  3. Configure libpam-google-authenticator in /etc/pam.d. You want to add the following line to the appropriate files (e.g., login). During initial deployment add
       auth required pam_google_authenticator.so nullok

    Once all users have been set up change the entry to

       auth required pam_google_authenticator.so no_increment_hotp

    This module has several other options, see the documentation for details.

  4. Edit /etc/ssh/sshd_config and make the following changes:
    • ChallengeResponseAuthentication yes
    • AuthenticationMethods publickey,keyboard-interactive
  5. Test the new configuration. Do not log out of the recovery SSH session until you have verified that you can access the system with the new configuration. See note below if you use encrypted home directories.

You can deploy the ~/.google_authenticator file to multiple systems. The benefit is that this is easier to automate and only requires the user to track a single secret – a major consideration if you’re using hardware fobs – but the drawback is that an attacker with access to the .google_authenticator file on one system will be able to get the MFA value for any system using the same file. A reasonable balance may be using the same file for servers in the same role (e.g., all database servers, all appservers, etc.) but different files for each category. That means someone with full access to the appservers will still be unable to access the database servers. This isn’t an unreasonable burden for sysadmins using an authenticator app or password manager but would still require multiple hardware fobs.
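
A minimal sketch of such a deployment – the host list is hypothetical, and assumes one shared file per role:

for host in db1 db2 db3; do
    scp -p ~/.google_authenticator "$host":
    ssh "$host" chmod 400 .google_authenticator
done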

Cautions

This is not a panacea. An attacker with shell access or backup media can see your .google_authenticator file. You might want to move the file out of its default location to prevent attackers from finding it with scripts. See the libpam-google-authenticator documentation for details.

Encrypted home directories can leave you unable to access your system. This won’t be obvious at the time if your recovery session uses the same account as the account you’re editing. The solution is to move the location of the .google_authenticator file outside of your home directory. See the libpam-google-authenticator documentation for details.

A misconfiguration can leave you unable to access your system. Always have an established recovery session in a root shell before attempting this. Don’t count on being able to run ‘sudo’ – it’s possible for a misconfiguration to leave you unable to run sudo in your recovery session. (Ask me how I know.) Ideally deployment should be automated using a well-tested puppet or ansible script.

Clock skew can leave you unable to access your system. Make sure you are running NTP. In highly secure environments the NTP port may be blocked – in this case you must ensure that the system has a reliable time via a different mechanism.

The .google_authenticator file should never be backed up. If you are on an ext2-based filesystem, set the extended attributes with “sudo chattr +dis .google_authenticator”. That marks the file as “no dump”, “immutable”, and “secure delete”. Some backup software honors the “no dump” attribute, other backup software requires you to explicitly add this file to a “don’t archive” list.
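
For example, assuming the file is in its default location:

$ sudo chattr +dis ~/.google_authenticator
$ lsattr ~/.google_authenticator    # the 's', 'i' and 'd' flags should now appear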

Alternatives

The libpam-google-authenticator module requires the user to have a hardware fob or authenticator app. Many people now prefer an approach where the system sends a text message containing a one-time code to a preconfigured number. This is not difficult in the AWS ecosystem since it’s easy to send a text message using the SNS service.

Doing this has the potential benefit of eliminating the need for files containing sensitive information. It depends on whether you’re willing to give up flexibility – using the standard TOTP code allows the user to continue using a hardware fob or authenticator app but requires you to keep the sensitive files. Using a random code eliminates the need for a sensitive file but prevents the use of alternate devices.

The potential drawbacks are that you’re replacing one piece of sensitive information with another – the phone number that receives the text message; that the user may not be able to receive the text message, e.g., with limited phone service; and that a really sophisticated attacker could spoof the phone service and intercept the code.

Another concern is that this approach requires time for the message to be sent, received, and entered. If using a TOTP-based code and a standard ‘tick’ there may be only about 30 seconds for this to complete.

I do not know if a PAM module already exists that implements this approach but it would not be difficult to implement if you have access to AWS SNS.

Another alternative is to extend this module so it either stores the TOTP secrets in a database (e.g., sqlite or sleepycat) or retrieves them via LDAP. The former makes it harder for an attacker to discover the secret; the latter removes the secret from the server entirely, albeit at the cost of requiring an additional server.

The bottom line

Multi-factor authentication is a valuable security tool. It can make life much more difficult for attackers – the mere presence of MFA may scare off a potential attacker. It allows you to check off boxes in security audits.

However it requires an unencrypted secret, so it is worthless against anyone with filesystem or backup media access. At least system passwords are hashed with salts and SSH keys have passphrases. TOTP secrets have neither. This means that unauthorized disclosure is immediately fatal since there’s no hash or passphrase to crack.

Footnotes

[1] There is one exception to the “no SSH keys on the servers” rule. You might need keys with restricted access for routine operations, e.g., if a cron task needs to execute a command on a different server. In these cases you’ll need a SSH key for the cron task but the destination server should restrict the commands that can be executed using that key.


Setting Up SSH Identity Forwarding on Jump Hosts

Bear Giles | November 22, 2017

One of the standard security checklist items for AWS EC2 instances is that they should never permit direct SSH access. Instead you should create a single “jump host” that runs nothing but an SSH daemon and is only visible from trusted fixed IP addresses or your VPN. The latter allows access from anywhere but does not introduce a security weakness if you use your own OpenVPN server. That only costs $5/month with a ‘droplet’ at Digital Ocean. (I haven’t been able to get VPNs working properly on EC2 instances.)

The servers all run SSH daemons, but you configure them to listen only on their private IP address and/or configure the VPC rules to allow access only via the private IP address. Their SSH daemon is never visible to the outside world.

To access your servers you first log into the jump host, then log into the final destination. You’re limited to the command line but that shouldn’t be an issue. Some people will enable port forwarding through the jump host but I personally consider SSH port forwarding a temporary solution at best – if you need anything other than the command line you should commit to a VPN solution.

The cost of this approach is trivial. It only requires an AWS EC2 ‘nano’ instance (about $5/month), and even that can be avoided if you only launch this server as needed.

Putting your SSH keys on the jump host would significantly reduce the security of your system. We want to carry our SSH identities with us as we log into a series of servers. Fortunately that’s easy to do. It only requires changes to two files.

First, we must tell ssh to forward our SSH credentials. This can be done globally or on a per-host basis.

/etc/ssh/ssh_config or ~/.ssh/config

ForwardAgent yes

Second, we must tell the ssh daemon to allow us to forward our SSH credentials.

/etc/ssh/sshd_config

AllowAgentForwarding yes
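
With both changes in place a hop looks like this (hostnames are hypothetical):

$ ssh -A you@jump-host          # -A forwards your local ssh-agent
$ ssh internal-server           # run from the jump host; your local key answers the challenge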

Alternative: NetCat

Local Linux god Kevin Fenzi expressed concern with the ssh-agent approach since anyone with root access on the jump host can reuse your SSH credentials while you’re logged in, and recommends a local ssh configuration that transparently invokes nc (netcat) on the jump host so you immediately jump to the final destination.

I was unable to get it to work but here’s his example:

Host your-internal-hostname-or-ip
   HostName %h
   ProxyCommand ssh -q yourusername@your-jump-host /usr/bin/nc %h 22
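
For what it’s worth, newer OpenSSH releases (7.3 and later) have a built-in equivalent that doesn’t require netcat on the jump host – a sketch using the same hypothetical hosts:

Host your-internal-hostname-or-ip
   ProxyJump yourusername@your-jump-host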

Embedded KDC Server using Apache MiniKDC

Bear Giles | November 19, 2017

One of the biggest headaches when working with Kerberos is that you need to set up external files in order to use it. That should be a simple one-time change but it can introduce subtle issues such as forcing developers to be on the corporate VPN when doing a build on their laptop.

The Hadoop developers already have a solution – the “MiniKDC” embedded KDC server. This class can be used to create a temporary KDC in the build environment that eliminates any need for external files or network resources. This approach comes at a cost – on my system it takes about 15 seconds to stand up the embedded server. You don’t want to run these tests on every build but a brief delay during integration tests is better than introducing a dependency on a VPN and a running server.
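
MiniKdc ships in its own artifact. A sketch of the Maven test dependency – match the version to the rest of your Hadoop stack:

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-minikdc</artifactId>
    <version>2.7.3</version>
    <scope>test</scope>
</dependency>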

Update regarding ticket caches on 11/22/2017

Important update on ticket caches (and TGT) on 11/22/2017. I need to emphasize that the standard implementation of the Krb5LoginModule does not create ticket cache files. That may not be clear below. I will post a followup article that discusses using the external kinit program to create Kerberos ticket caches.

Embedded Servers and JUnit 4 Rules

Modern test frameworks have a way to stand up test resources before tests and tear them down afterwards. With JUnit 4 this is done with Rules. A rule is an annotation that the test runner recognizes and knows how to use. The details are different in other test frameworks (or JUnit 5) but the underlying concepts are the same.

An embedded KDC is a class-level external resource.

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Properties;

import org.apache.hadoop.minikdc.MiniKdc;
import org.junit.rules.ExternalResource;

public class EmbeddedKdcResource extends ExternalResource {
    private final File baseDir;
    private MiniKdc kdc;

    public EmbeddedKdcResource() {
        try {
            baseDir = Files.createTempDirectory("mini-kdc_").toFile();
        } catch (IOException e) {
            // throw AssertionError so we don't have to deal with handling declared
            // exceptions when creating a @ClassRule object.
            throw new AssertionError("unable to create temporary directory: " + e.getMessage());
        }
    }

    /**
     * Start KDC.
     */
    @Override
    public void before() throws Exception {

        final Properties kdcConf = MiniKdc.createConf();
        kdcConf.setProperty(MiniKdc.INSTANCE, "DefaultKrbServer");
        kdcConf.setProperty(MiniKdc.ORG_NAME, "EMBEDDED");
        kdcConf.setProperty(MiniKdc.ORG_DOMAIN, "INVARIANTPROPERTIES.COM");

        // several sources say to use extremely short lifetimes in test environment.
        // however setting these values results in errors.
        //kdcConf.setProperty(MiniKdc.MAX_TICKET_LIFETIME, "15_000");
        //kdcConf.setProperty(MiniKdc.MAX_RENEWABLE_LIFETIME, "30_000");

        kdc = new MiniKdc(kdcConf, baseDir);
        kdc.start();

        // this is the standard way to set the default location of the JAAS config file.
        // we don't need to do this since we handle it programmatically.
        //System.setProperty("java.security.krb5.conf", kdc.getKrb5conf().getAbsolutePath());
    }

    /**
     * Shut down KDC, delete temporary directory.
     */
    @Override
    public void after() {
        if (kdc != null) {
            kdc.stop();
        }
    }

    /**
     * Get realm.
     */
    public String getRealm() {
        return kdc.getRealm();
    }

    /**
     * Create a keytab file with entries for specified user(s).
     *
     * @param keytabFile
     * @param names
     * @throws Exception
     */
    public void createKeytabFile(File keytabFile, String... names) throws Exception {
        kdc.createPrincipal(keytabFile, names);
    }
}

Functional Tests

Once we have an embedded KDC we can quickly write tests that attempt to get a JAAS LoginContext using Kerberos authentication. We call it a success if LoginContext#login() succeeds.

import static org.hamcrest.Matchers.contains;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.notNullValue;
import static org.junit.Assert.assertThat;

import java.io.File;

import javax.security.auth.kerberos.KerberosPrincipal;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;

import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

public class BasicKdcTest {

    @ClassRule
    public static final TemporaryFolder tmpDir = new TemporaryFolder();

    @ClassRule
    public static final EmbeddedKdcResource kdc = new EmbeddedKdcResource();

    private static KerberosPrincipal alice;
    private static KerberosPrincipal bob;
    private static File keytabFile;
    private static File ticketCacheFile;

    private KerberosUtilities utils = new KerberosUtilities();

    @BeforeClass
    public static void createKeytabs() throws Exception {
        // create Kerberos principal and keytab filename.
        alice = new KerberosPrincipal("alice@" + kdc.getRealm());
        bob = new KerberosPrincipal("bob@" + kdc.getRealm());
        keytabFile = tmpDir.newFile("users.keytab");
        ticketCacheFile = tmpDir.newFile("krb5cc_alice");

        // create keytab file containing key for Alice but not Bob.
        kdc.createKeytabFile(keytabFile, "alice");

        assertThat("ticket cache does not exist", ticketCacheFile.exists(), equalTo(true));
    }

    /**
     * Test LoginContext login without TGT ticket (success).
     *
     * @throws LoginException
     */
    @Test
    public void testLoginWithoutTgtSuccess() throws LoginException {
        final LoginContext lc = utils.getKerberosLoginContext(alice, keytabFile);
        lc.login();
        assertThat("subject does not contain expected principal", lc.getSubject().getPrincipals(),
                contains(alice));
        lc.logout();
    }

    /**
     * Test LoginContext login without TGT ticket (unknown user). This only
     * tests for missing keytab entry, not a valid keytab file with an unknown user.
     *
     * @throws LoginException
     */
    @Test(expected = LoginException.class)
    public void testLoginWithoutTgtUnknownUser() throws LoginException {
        @SuppressWarnings("unused")
        final LoginContext lc = utils.getKerberosLoginContext(bob, keytabFile);
    }

    /**
     * Test getKeyTab() method (success)
     */
    @Test
    public void testGetKeyTabSuccess() throws LoginException {
        assertThat("failed to see key", utils.getKeyTab(alice, keytabFile), notNullValue());
    }

    /**
     * Test getKeyTab() method (unknown user)
     */
    @Test(expected = LoginException.class)
    public void testGetKeyTabUnknownUser() throws LoginException {
        assertThat("failed to see key", utils.getKeyTab(bob, keytabFile), notNullValue());
    }
}

Next Steps

The next article will discuss the Apache Hadoop UserGroupInformation class and how it connects to JAAS authentication.

Source

You can download the source for this article here: JAAS with Kerberos; Unit Test using Apache Hadoop Mini-KDC.


JAAS without configuration files; JAAS and Kerberos

Bear Giles | November 19, 2017

Java’s JAAS abstraction is a powerful tool to handle authentication but it has two major weaknesses in practice. First, nearly all of the discussions on how to use it assume that the developer can write the JAAS configuration file to a secure location. That may not be easy in a hosted environment. Second, JAAS has some unexpected behavior that made sense at the time but which can bite developers today. Neither is difficult to overcome once you know the solution.

This article will discuss the solution to these problems and give a concrete example using Kerberos authentication. Kerberos is widely used in the Hadoop ecosystem and a future article will discuss how to use the Hadoop-specific UserGroupInformation class.

Limitations

Unfortunately this code does not completely eliminate the need for external files. First, we must still provide an explicit Kerberos keytab file. I will update this article if I find an approach that eliminates this limitation.

Second, we must still provide an external krb5.conf configuration file. This is required by the Krb5LoginModule class and I think we’re limited to changing the location of this file.

Update regarding ticket caches on 11/22/2017

Important update on ticket caches (and TGT) on 11/22/2017. I need to emphasize that the standard implementation of the Krb5LoginModule does not create ticket cache files. That may not be clear below. I will post a followup article that discusses using the external kinit program to create Kerberos ticket caches.

The JAAS Configuration Class

The first issue to discuss is eliminating the need for a JAAS configuration file. We want to be able to configure JAAS programmatically, perhaps using information provided via a traditional database or a cloud discovery service such as Spring Cloud Config. JAAS provides an oft-overlooked class that can replace the external configuration: javax.security.auth.login.Configuration. The most general solution is to create a class that accepts a Map in its constructor and uses it to create an array of AppConfigurationEntry values.

import java.util.HashMap;
import java.util.Map;

import javax.security.auth.login.AppConfigurationEntry;

class CustomLoginConfiguration extends javax.security.auth.login.Configuration {
    private static final String SECURITY_AUTH_MODULE_KRB5_LOGIN_MODULE =
            "com.sun.security.auth.module.Krb5LoginModule";

    private final Map<String, AppConfigurationEntry> entries = new HashMap<>();

    /**
     * Constructor taking a Map of JAAS service names to module options.
     */
    public CustomLoginConfiguration(Map<String, Map<String, String>> params) {
        for (Map.Entry<String, Map<String, String>> entry : params.entrySet()) {
            entries.put(entry.getKey(),
                    new AppConfigurationEntry(SECURITY_AUTH_MODULE_KRB5_LOGIN_MODULE,
                AppConfigurationEntry.LoginModuleControlFlag.REQUIRED, entry.getValue()));
        }
    }

    /**
     * Get entry.
     */
    @Override
    public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
        if (entries.containsKey(name)) {
            return new AppConfigurationEntry[] { entries.get(name) };
        }
        return new AppConfigurationEntry[0];
    }
}

In practice we’ll only need one or two JAAS configurations in our application and it may be more maintainable to write a convenience class. It is very easy to use an external configuration file to identify the required properties and then convert the final file into a static method that populates the Map.

import static java.lang.Boolean.TRUE;

import java.io.File;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import javax.security.auth.kerberos.KerberosPrincipal;

class Krb5WithKeytabLoginConfiguration extends CustomLoginConfiguration {

    /**
     * Constructor taking basic Kerberos properties.
     *
     * @param serviceName JAAS service name
     * @param principal Kerberos principal
     * @param keytabFile keytab file containing key for this principal
     */
    public Krb5WithKeytabLoginConfiguration(String serviceName, KerberosPrincipal principal, File keytabFile) {
        super(Collections.singletonMap(serviceName, makeMap(principal, keytabFile)));
    }

    /**
     * Static method that creates the Map required by the parent class.
     *
     * @param principal Kerberos principal
     * @param keytabFile keytab file containing key for this principal
     */
    private static Map<String, String> makeMap(KerberosPrincipal principal, File keytabFile) {
        final Map<String, String> map = new HashMap<>();

        // this is the basic Kerberos information
        map.put("principal", principal.getName());
        map.put("useKeyTab", TRUE.toString());
        map.put("keyTab", keytabFile.getAbsolutePath());

        // 'fail fast'
        map.put("refreshKrb5Config", TRUE.toString());

        // we're doing everything programmatically so we never want to prompt the user.
        map.put("doNotPrompt", TRUE.toString());
        return map;
    }
}

The JAAS CallbackHandler and LoginContext Classes

A nasty surprise for many developers is that the JAAS implementation will fall back to a non-trivial default implementation if a custom CallbackHandler is not provided. At best this will result in confusing error messages, at worst an attacker can override the default implementation with one that is much more open than the developer intended.

Fortunately this is easy to handle when creating the JAAS LoginContext. We could use an empty handler method but it doesn’t hurt to log any messages in case there’s a problem.

This example uses the LoginContext method that takes a suggested Subject. It may be possible to provide the keytab information via the Subject’s private credentials instead of passing in an explicit file location via the ‘keyTab’ property but I haven’t found it yet. I’ve left the code in place in case it will help others.

class KerberosUtilities {
    private static final Logger LOG = LoggerFactory.getLogger(KerberosUtilities.class);

    /**
     * Get JAAS LoginContext for specified Kerberos parameters
     *
     * @param principal Kerberos principal
     * @param keytabFile keytab file containing key for this principal
     */
    public LoginContext getKerberosLoginContext(KerberosPrincipal principal, File keytabFile)
            throws LoginException, ConfigurationException {

        final KeyTab keytab = getKeyTab(keytabFile, principal);

        // create Subject containing basic Kerberos parameters.
        final Set<Principal> principals = Collections.<Principal> singleton(principal);
        final Set<?> pubCredentials = Collections.emptySet();
        final Set<?> privCredentials = Collections.<Object> singleton(keytab);
        final Subject subject = new Subject(false, principals, pubCredentials, privCredentials);

        // create LoginContext using this subject.
        final String serviceName = "krb5";
        final LoginContext lc = new LoginContext(serviceName, subject,
                new CallbackHandler() {
                    public void handle(Callback[] callbacks) {
                        for (Callback callback : callbacks) {
                            if (callback instanceof TextOutputCallback) {
                                LOG.info(((TextOutputCallback) callback).getMessage());
                            }
                        }
                    }
                }, new Krb5WithKeytabLoginConfiguration(serviceName, principal, keytabFile));

        return lc;
    }

    /**
     * Convenience method that verifies keytab file exists, is readable, and contains appropriate entry.
     */
    public KeyTab getKeyTab(File keytabFile, KerberosPrincipal principal)
            throws LoginException {

        if (!keytabFile.exists() || !keytabFile.canRead()) {
            throw new LoginException("specified file does not exist or cannot be read");
        }

        // verify keytab file exists
        KeyTab keytab = KeyTab.getInstance(principal, keytabFile);
        if (!keytab.exists()) {
            throw new LoginException("specified file is not a keytab file");
        }

        // verify keytab file actually contains at least one key for this principal.
        KerberosKey[] keys = keytab.getKeys(principal);
        if (keys.length == 0) {
            throw new LoginException("keytab file does not contain required entry");
        }

        // destroy keys since we don't need them, we just need to make sure they exist.
        for (KerberosKey key : keys) {
            try {
                key.destroy();
            } catch (DestroyFailedException e) {
                LOG.debug("unable to destroy key");
            }
        }

        return keytab;
    }
}

The Final Bits for Kerberos

There is one final problem. The Krb5LoginModule expects the Kerberos configuration file to be located at a standard location, typically /etc/krb5.conf on Linux systems. This can be overridden with the java.security.krb5.conf system property.

The default realm is usually set in the Kerberos configuration file. You can override it with the java.security.krb5.realm system property.
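
A quick sketch of these properties together – the path, realm, and KDC host here are illustrative:

// point the JVM at a non-standard configuration file...
System.setProperty("java.security.krb5.conf", "/opt/app/etc/krb5.conf");
// ...or skip the file and name the realm and KDC explicitly.
System.setProperty("java.security.krb5.realm", "EXAMPLE.COM");
System.setProperty("java.security.krb5.kdc", "kdc.example.com");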

Cloudera (Hadoop Cluster)

You must set one additional system property, at least when using Cloudera clients with Hive:

  • javax.security.auth.useSubjectCredsOnly=false

For more information on this see Hive JDBC client error when connecting to Kerberos Cloudera cluster.

Debugging

Finally there are several useful system properties if you are stuck:

  • sun.security.krb5.debug=true
  • java.security.debug=gssloginconfig,configfile,configparser,logincontext

Next Steps

The next article will discuss writing unit tests using an embedded KDC server.

Source

You can download the source for this article here: JAAS with Kerberos; Unit Test using Apache Hadoop Mini-KDC.


Proactive Database Defenses Using Triggers

Bear Giles | January 15, 2017

I’m sure I’ve discussed this a number of years ago but a question came up after the recent Boulder Linux User Group meeting and I decided this would be a good time to revisit it.

The question is: how do you protect sensitive information from illicit insertion or modification when the attacker has full SQL access as the website user?

Important: I am focused on approaches we can use in the database itself, not our application, since the former will protect our data even if an attacker has full access to the database. These approaches are invisible to our database frameworks, e.g., JPA, once we have created the tables.

An Approach Without Triggers

At a minimum we can ensure that the database was properly configured with multiple users:

app_owner – owns the schema and tables. Often does not have INSERT/UPDATE/DELETE (or even SELECT) privileges on the tables.

app_user – owns the data but cannot modify the schema, tables, etc.

We can make this much more secure by splitting app_user into two users, app_reader and app_writer. The former only has SELECT privileges on the tables. This is the only account used by user-facing code. The app_writer account adds INSERT/UPDATE/DELETE privileges and is only used by the methods that actually need to modify the data. Data is typically read so much more often than it is written that it often makes sense to view an application as two (or more) separate but related applications. In fact they may be – you can improve security by handling any data manipulation via microservices only visible to the application.
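
A minimal PostgreSQL sketch of these grants – the role names follow the convention above, and the tables are assumed to live in the public schema:

CREATE ROLE app_reader LOGIN PASSWORD 'changeme';
CREATE ROLE app_writer LOGIN PASSWORD 'changeme';
GRANT SELECT ON ALL TABLES IN SCHEMA public TO app_reader;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_writer;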

There is a big downside to this – modern database frameworks, e.g., JPA or Hibernate, make heavy use of caching to improve performance. You need to ensure that the app_reader side’s cache is properly updated whenever the corresponding record(s) are modified through the app_writer account.

Security Defense

This is highly database specific – does the database maintain logs that show when a user attempts to perform a non-permitted action? If so you can watch the logs on the app_reader account. Any attempt to insert or update data is a strong indication of an attacker.

Triggers Based On Related Information

A 3NF (or higher) database requires that each column be independent. In practice we often perform partial denormalization for performance reasons, e.g., adding a column for the day of the week in addition to the full date. We can easily compute the former from the latter but it takes time and can’t be indexed.

There’s a risk that a bug or intruder will introduce inconsistencies. One common solution is to use an INSERT OR UPDATE trigger that calculates the value at the time the data is inserted into the database. E.g.,

CREATE FUNCTION calculate_day_of_week() ....

CREATE TABLE date_with_dow (
    date text,
    dow  text
);

CREATE FUNCTION set_day_of_week() RETURNS trigger AS $$
    BEGIN
        -- always compute the cached column from the authoritative one
        NEW.dow = calculate_day_of_week(NEW.date);
        RETURN NEW;
    END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER set_day_of_week BEFORE INSERT OR UPDATE ON date_with_dow
   FOR EACH ROW EXECUTE PROCEDURE set_day_of_week();

This ensures that the day of week is properly set. A software bug, or attacker, can try specifying an invalid value but they’ll fail.

Of course we don’t really care (much) if the day of the week is incorrect. However there are other times when we care a great deal, e.g., cached attributes from digital certificates. If someone can insert a certificate with mismatched cached values, esp. if they can replace an existing table entry, then they can do a lot of damage if the code doesn’t assume that the database could be corrupted and perform its own validation checks on everything it gets back. (First rule of security: never trust anything.) Even with tests we’ll only know that the data has been corrupted, not when and not how broadly.

Security Defense

Developers are information packrats. Can we learn anything from the provided day of week value?

Yes. It’s a huge red flag if the provided value doesn’t match the calculated value (modulo planned exceptions, e.g., passing null or a sentinel value to indicate that the application is deferring to the database). It’s easy to add a quick test:

CREATE FUNCTION calculate_day_of_week() ....

-- user-defined function that can do anything from adding an entry into
-- a table to sending out an email, SMS, etc., alert
CREATE FUNCTION security_alert() ....

CREATE TABLE date_with_dow (
    date text,
    dow  text
);

CREATE FUNCTION set_day_of_week() RETURNS trigger AS $$
    DECLARE
        calculated_dow text;
    BEGIN
        calculated_dow = calculate_day_of_week(NEW.date);
        -- a non-null provided value that disagrees with the calculated value
        -- is a red flag (null means the application defers to the database)
        IF (NEW.dow IS NOT NULL AND NEW.dow <> calculated_dow) THEN
            PERFORM security_alert('bad dow value!');
            RETURN NULL;
        END IF;
        NEW.dow = calculated_dow;
        RETURN NEW;
    END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER set_day_of_week BEFORE INSERT OR UPDATE ON date_with_dow
    FOR EACH ROW EXECUTE PROCEDURE set_day_of_week();

Sidenote: check out your database documentation for more ideas. For instance many applications use @PrePersist annotations to autofill creationDate and lastUpdateDate. It’s easy to do this via a trigger – and using a trigger ensures that the data is updated even if an attacker modifies records via SQL injection or direct access. More impressively, you can write audit information to a separate table, perhaps even in a separate schema where the app_user only has INSERT privileges, in order to prevent an attacker from learning what the system has recorded about them, much less altering or deleting that information.

I’ve written triggers that generate XML representations of the OLD and NEW values and write them to an audit table together with date, etc. On INSERT the OLD data is null, on DELETE the NEW data is null. Using XML allows us to use a common audit table (table name is just a field) and potentially allows you to add transaction id, etc.

It is then easy to use a bit of simple XML diff code to see exactly what changed when by reviewing the audit table.
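
A rough sketch of such a trigger against the date_with_dow table from earlier (a production version would use a generic row-to-XML conversion rather than naming columns, and the audit_log layout here is illustrative):

CREATE TABLE audit_log (
    table_name text,
    changed_at timestamptz DEFAULT now(),
    old_values xml,
    new_values xml
);

CREATE FUNCTION audit_date_with_dow() RETURNS trigger AS $$
    BEGIN
        -- OLD is null on INSERT, NEW is null on DELETE
        IF TG_OP = 'INSERT' THEN
            INSERT INTO audit_log (table_name, old_values, new_values)
            VALUES (TG_TABLE_NAME, NULL, xmlforest(NEW.date, NEW.dow));
        ELSIF TG_OP = 'DELETE' THEN
            INSERT INTO audit_log (table_name, old_values, new_values)
            VALUES (TG_TABLE_NAME, xmlforest(OLD.date, OLD.dow), NULL);
        ELSE
            INSERT INTO audit_log (table_name, old_values, new_values)
            VALUES (TG_TABLE_NAME, xmlforest(OLD.date, OLD.dow),
                    xmlforest(NEW.date, NEW.dow));
        END IF;
        RETURN NULL;
    END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER audit_date_with_dow AFTER INSERT OR UPDATE OR DELETE ON date_with_dow
    FOR EACH ROW EXECUTE PROCEDURE audit_date_with_dow();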

Resources:

  • PostgreSQL
  • MySQL
  • Oracle

Triggers Based On Secrets

What about tables where there’s no “related” columns? Can we use a trigger to detect an illicit attempt to INSERT or UPDATE a record?

Yes!

In this case we want to add an extra column to the table. It can be anything – the sole purpose is to create a way to pass a validation token to the trigger.

What are validation tokens?

A validation token can be anything you want. A few examples are:

A constant – this is the easiest, but it is only as strong as your ability to keep it secret. An example is ’42’. An obvious variant is the sum of several of the other columns of the table. This value should not be written to the database or it will be exposed to anyone with SELECT privileges.

A time-based value – your webserver and database will have closely synced clocks so you can use a time-based protocol such as the Time-based One-time Password (TOTP) algorithm. If both the database and application servers use NTP you can keep the window as small as a few seconds. Just remember to include one tick on either side when validating the token – NTP keeps the clocks synchronized but there can still be a very small skew plus network latency to consider.

Note: TOTP requires a shared secret and is independent of the contents of the INSERT or UPDATE statement.

You can save a time-based value but it is meaningless without a timestamp – and some algorithms can be cracked if you have a series of values and the starting time.
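
For concreteness, here’s what the application-side TOTP calculation could look like – a minimal RFC 6238 sketch with illustrative names; the database would run the same calculation with the shared secret and accept one step of skew on either side:

import java.nio.ByteBuffer;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class TotpToken {
    // RFC 6238 TOTP: HMAC-SHA1 over the number of time steps since the epoch,
    // then RFC 4226 dynamic truncation down to a six-digit token
    public static int totp(byte[] sharedSecret, long epochSeconds, int stepSeconds)
            throws Exception {
        long counter = epochSeconds / stepSeconds;
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(sharedSecret, "HmacSHA1"));
        byte[] hash = mac.doFinal(ByteBuffer.allocate(8).putLong(counter).array());
        int offset = hash[hash.length - 1] & 0x0f;
        int binary = ((hash[offset] & 0x7f) << 24)
                   | ((hash[offset + 1] & 0xff) << 16)
                   | ((hash[offset + 2] & 0xff) << 8)
                   |  (hash[offset + 3] & 0xff);
        return binary % 1000000;
    }
}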

An HMAC value – most people will be familiar with standard cryptographic hashes such as MD5 or SHA-1 (both considered cracked) or SHA-256. They’re powerful tools – but in part because everyone will compute the same values given the same input.

In our case we want an HMAC – a cryptographically strong message digest that also requires a secret key. An attacker cannot generate a valid HMAC without that key, while anyone who holds the shared key can verify one. An HMAC needs something in the record to process, and it should be intrinsic to the value of the record – for instance a digital certificate, a PDF document, even a hashed password. Don’t use it to hash the primary key or any value that can be readily reused.

You can freely save an HMAC value.
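
On the application side the JDK’s javax.crypto API covers this directly; a small sketch (the class name is mine):

import java.security.GeneralSecurityException;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class RecordHmac {
    // HMAC-SHA256 over the intrinsic content of the record, e.g. the DER bytes
    // of a certificate or the bytes of a PDF document
    public static String token(byte[] key, byte[] recordContent) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            return Base64.getEncoder().encodeToString(mac.doFinal(recordContent));
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}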

Subsequent validation

We would like to know that values haven’t been corrupted, e.g., by an attacker knowledgeable enough to disable the trigger, insert bad values, and then restore the trigger. The last step is important since we can / should run periodic scans to ensure all security-related features like these database triggers are still in place. Can we use these techniques to validate the records after the fact?

Constant value: no.

Time-based value: only if we record a timestamp as well, and if we do then we have to assume that the secret has been compromised. So… no.

HMAC value: yes.

Backups and Restorations

Backups and restorations have the same problems as subsequent validations. You can’t allow any magic values to be backed up (or an attacker could learn it by stealing the backup media) and you can’t allow the time-based values plus timestamps to be backed up (or an attacker could learn the shared secret by stealing the backup media). That means you would need to disable the trigger when restoring data to the database and you can’t verify that it’s properly validated afterwards. Remember: you can’t trust backup media!

The exception is HMAC tokens. They can be safely backed up and restored even if the triggers are in place.

Security Defense

You can add a token column to any table. As always it’s a balance between security and convenience and the less powerful techniques may be Good Enough for your needs. But for highly sensitive records, esp. those that are inserted or updated relatively infrequently, an HMAC token may be a good investment.

Implementation-wise: on the application side you can write a @PrePersist method that handles the creation of the TOTP or HMAC token. It’s a standard calculation and the biggest issue, as always, is key management. On the database side you’ll need to have a crypto library that supports whatever token method you choose.
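
On the PostgreSQL side, for example, the pgcrypto extension provides an hmac() function, so the trigger-side check could look roughly like this (the certificate table, its columns, and the key delivered via a custom setting are all illustrative – real key management is harder than this):

CREATE EXTENSION IF NOT EXISTS pgcrypto;

CREATE FUNCTION verify_hmac_token() RETURNS trigger AS $$
    BEGIN
        -- recompute the token over the record's intrinsic content and compare
        IF NEW.hmac_token IS DISTINCT FROM
               encode(hmac(NEW.cert_der,
                           convert_to(current_setting('app.hmac_key'), 'UTF8'),
                           'sha256'), 'hex') THEN
            PERFORM security_alert('bad HMAC token');
            RETURN NULL;
        END IF;
        RETURN NEW;
    END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER verify_hmac_token BEFORE INSERT OR UPDATE ON certificate
    FOR EACH ROW EXECUTE PROCEDURE verify_hmac_token();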

Shadow Tables

Finally, there are two concerns with the last approach. First, it requires crypto libraries to be available in your database, and that may not be the case. Second, if a value is inserted it’s impossible to know for sure that it was your application that inserted it.

There’s a solution to this which is entirely app-side. It might not give you immediate notification of a problem but it still gives you some strong protection when you read the data.

You start as above – add a @PrePersist method that calculates an HMAC code. Only now you edit the domain bean so that the HMAC column uses a @SecondaryTable instead of the main table. (I think you can even specify a different schema if you want even higher security.) From the data perspective this is just two tables with a 1:1 relationship but from the source code perspective it’s still a single object.

Putting this into a separate table, if not a separate schema as well, means that a casual attacker will not know that it is there. They might succeed in inserting or modifying data but not realize that the changes will be detected even if audit triggers are disabled.

The final step is adding a @PostLoad method that verifies the HMAC code. If it’s good you can have confidence the data hasn’t been corrupted. If it’s incorrect or missing you know there’s a problem and you shouldn’t trust it.
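
Putting the pieces together, a sketch of such a bean (the tables, the audit schema, and the key lookup are hypothetical, and RecordHmac is the helper sketched earlier):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.PostLoad;
import javax.persistence.PrePersist;
import javax.persistence.PreUpdate;
import javax.persistence.SecondaryTable;
import javax.persistence.Table;

@Entity
@Table(name = "certificate")
@SecondaryTable(name = "certificate_hmac", schema = "audit")
public class Certificate {
    @Id
    private Long id;

    @Column(name = "cert_der")
    private byte[] certDer;

    // lives in the shadow table - a casual look at 'certificate' won't reveal it
    @Column(table = "certificate_hmac", name = "hmac_token")
    private String hmacToken;

    @PrePersist
    @PreUpdate
    void sign() {
        hmacToken = RecordHmac.token(currentKey(), certDer);
    }

    @PostLoad
    void verify() {
        if (!RecordHmac.token(currentKey(), certDer).equals(hmacToken)) {
            throw new IllegalStateException("possible tampering: certificate " + id);
        }
    }

    // key management is the hard part and entirely omitted from this sketch
    private static byte[] currentKey() {
        throw new UnsupportedOperationException("key management not shown");
    }
}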

For advanced users: the developers won’t even need to know that the extra data is present – you can do a lot with AOP, and some teams are organized so that the developers write unsecured code while a security team – focused entirely on security, not features – adds security through AOP code interwoven into the existing code. But that’s a topic for a future blog….


Encrypt your usernames and email addresses!

Bear Giles | June 13, 2016

The possibility of a massive Twitter password breach brings up a basic requirement in our world. I discussed encrypting personally identifiable information (PII) last August but didn’t mention login information specifically.

Encrypt your usernames and email addresses.

Use a unique, random salt/IV for each user.

This won’t stop a dedicated attacker with resources but it will slow them down and that buys time for you to discover the breach and for your users to change their passwords. Maybe not much time, maybe a week instead of a few days until a majority of passwords is cracked, but that could make a big difference for a lot of users.

How do I find a user if their username and email is encrypted?

This is easy to answer – also store a hash of each user’s username and email. Think HashMap – not unique identifier – it’s okay if there are a few users in each ‘bin’. In fact this is desirable since it makes the job of the attacker a little bit harder. You can use the last few bytes of a SHA hash.

You can efficiently determine the correct user – compute the password hash for each potential match using their unique hash salt. If there are no matches you know it’s an invalid username and/or password. If there is a match you can encrypt the provided username or email address, again using the unique salt (possibly different from the password hash salt), and see if there’s a match.

(Note: if you compare ciphertext you must make sure it does not contain the random salt/IV. Otherwise you’ll need to decrypt the values and compare the plaintext.)
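
A sketch of the ‘bin’ half of that lookup (the names are mine; candidate matching and constant-time comparison are left out):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

public class UserLookup {
    // Non-unique lookup key: the last four bytes of a SHA-256 hash of the
    // username. Several users may share a bin; we then test each candidate's
    // password hash and encrypted username, each with its own per-user salt.
    public static byte[] binKey(String username) throws Exception {
        byte[] hash = MessageDigest.getInstance("SHA-256")
                .digest(username.getBytes(StandardCharsets.UTF_8));
        return Arrays.copyOfRange(hash, hash.length - 4, hash.length);
    }
}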

Can I store a hash of the username or email address instead?

Good idea! The answer is maybe. It depends on your data model. That brings us to the next point…

Any other ideas to improve my security?

Think microservices. Even if you have a monolithic application you probably have it broken apart into separate components internally – e.g., an e-commerce site probably has user management and authentication, inventory, shopping cart, fulfillment, payment, etc. There’s no reason these components need to share the same database – they can use separate schemas and database accounts.

You can take that a step further and use different physical databases for critical information. E.g., the user authentication information can (and arguably should) be stored on a different server than anything user-facing.

If you do this then you can store a hash of the username and email address in your authentication tables and database. Again this will only slow down a dedicated attacker (they’ll have likely usernames and email addresses from prior attacks) but it buys you, and your users, time.

Another good idea is to periodically reencrypt everything with a new key. An attacker might not be able to obtain a copy of the database and the encryption key at the same time so this gives an extra measure of security. You might be tempted to use multiple encryption keys, e.g., one identified by the month they registered, but I doubt the improved security is worth the increased complexity. IMHO it’s much better to use a single key that you rotate on a regular basis.

I’m small fry – who would target me? Why should I bother with this?

You might be “small fry” but many of your users will use the same password on other sites. Attackers know this and that “small fry” often have lax security out of ignorance or the misguided belief that they’ll never be the target of an attack.


Introduction to Kerberos – PostgreSQL (part 2)

Bear Giles | April 21, 2016

In the first part of this introduction I discussed setting up the Kerberos infrastructure and creating users. In this part I will demonstrate how to use it to connect to a PostgreSQL server.

Note: GSSAPI is a generic security interface that can use different protocols on the backend. Kerberos is one of many protocols that can be used with GSSAPI. Using it allows developers to write flexible code while still allowing the system administrator to have the final word on which authentication method(s) will be accepted. When given a choice you should enable GSSAPI instead of just Kerberos.

Configuring the Server

The default servicename for PostgreSQL is ‘postgres’. There is one notable exception – working with Active Directory requires a servicename of ‘POSTGRES’, and that requires recompiling the server. The full principal for a PostgreSQL server is ‘postgres/fqdn@realm’ although in some situations you’ll need to specify postgres@fqdn.

We start by creating a keytab file for the server:

$ kadmin
kadmin% ank -randkey postgres/db.invariantproperties.com
kadmin% ktadd -k krb5.keytab postgres/db.invariantproperties.com

Note that the value of the fully-qualified domain name (fqdn) is critical. The server and client must both recognize it as the name of the server – the server uses it to identify which keytab entry to use as it comes up and the client uses it when it constructs its query to the KDC. Ideally the name will be fully supported by DNS – including reverse lookups – but it’s often enough to put entries into the /etc/hosts files if you’re only working with a few systems.

Once you have your keytab file you should copy it to /etc/postgresql/9.4/main/krb5.keytab and change the file’s ownership. You should also make sure that the file can only be read by the server.

$ sudo chown postgres:postgres krb5.keytab
$ sudo chmod 0600 krb5.keytab

We now need to tell the server the location of the keytab file and to listen to all network interfaces. Kerberos can only be used on TCP connections.

/etc/postgresql/9.4/main/postgresql.conf

...
krb_server_keyfile = '/etc/postgresql/9.4/main/krb5.keytab'
listen_addresses = '*'

We must also tell the server which databases can be accessed via Kerberos. Security note: in production we never want to use a wildcard (‘all’) for both database and user on a single line.

/etc/postgresql/9.4/main/pg_hba.conf

# TYPE  DATABASE        USER            ADDRESS                 METHOD       OPTIONS
host    all             all             52.34.69.195/32         gss          include_realm=1 map=gss krb_realm=INVARIANTPROPERTIES.COM

We will always want to retain the Kerberos realm so we don’t confuse ‘bob@FOO.COM’ with ‘bob@BAZ.COM’ but this requires us to use the pg_ident mapping file. The name of the mapping is arbitrary – I’m using ‘gss’ for convenience.

In this case we’re being even stricter and requiring that the Kerberos domain match ‘INVARIANTPROPERTIES.COM’ specifically. This isn’t always possible but you can repeat the line if you only need to support a few realms.

We must add entries to the server’s identity lookup file. The easiest approach is to add authorized users directly:

/etc/postgresql/9.4/main/pg_ident.conf

# MAPNAME    SYSTEM-USERNAME                    PG-USERNAME
gss          bgiles@INVARIANTPROPERTIES.COM     bgiles

This is not a good long-term solution since you must manually restart the database, or at least reload the configuration files, every time you need to add or remove an authorized user. It’s much better to qualify the username, e.g., with ‘/postgres’, and then use regular expression matching in the pg_ident file.

/etc/postgresql/9.4/main/pg_ident.conf

# MAPNAME    SYSTEM-USERNAME                                    PG-USERNAME
gss           /^([^/]+)\/postgres@INVARIANTPROPERTIES\.COM$     \1

Important: do not share Kerberos credentials. Either map multiple Kerberos users to the same PostgreSQL identity or map them to different PostgreSQL identities and use the standard grants and roles to control access within the database.

We’re now ready to restart the PostgreSQL server.

$ sudo service postgresql restart

Example

The following is an example dialogue as I attempt to log into the server.

# try to log in without any Kerberos tickets
bgiles@snowflake:~$ psql -h kpg
psql: GSSAPI continuation error: Unspecified GSS failure.  Minor code may provide more information
GSSAPI continuation error: No Kerberos credentials available

# log in as regular user, try to connect. The error is a little confusing but we do not get access.
bgiles@snowflake:~$ kinit bgiles
Password for bgiles@INVARIANTPROPERTIES.COM: 

bgiles@snowflake:~$ psql -h kpg
psql: duplicate GSS authentication request

# log in as database user, try to connect.
bgiles@snowflake:~$ kinit bgiles/postgres
Password for bgiles/postgres@INVARIANTPROPERTIES.COM: 

bgiles@snowflake:~$ psql -h kpg
psql (9.4.7, server 9.4.6)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.

bgiles=# 

Establishing a JDBC Connection

Connecting to the database via the CLI demonstrates that we have properly set up Kerberos authentication but in practice we will want to access the database programmatically. That means JDBC connections.

Is it possible?

The answer is yes and no. I’ll cover it in further detail in the next part. It’s easy to establish the connection but there’s an unexplained problem when mapping the GSS/Kerberos identity to the PostgreSQL identity. Investigation continues…

PostgreSQL currently supports Kerberos authentication if you use a simple Kerberos principal (user) and password as your connection properties. It does not support a compound Kerberos principal as I discussed above, nor does it support the use of keytab files.
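
For reference, this is roughly what the simple-principal connection looks like with the pgjdbc driver – a sketch; gsslib, kerberosServerName, and jaasApplicationName are real driver connection properties, but the URL, principal, and JAAS setup here are illustrative:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class KerberosJdbc {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "bgiles");             // simple Kerberos principal
        props.setProperty("gsslib", "gssapi");           // force GSSAPI rather than SSPI
        props.setProperty("kerberosServerName", "postgres");
        // selects an entry in the JAAS login configuration file
        props.setProperty("jaasApplicationName", "pgjdbc");

        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://kpg/bgiles", props)) {
            System.out.println("connected: " + conn.getMetaData().getURL());
        }
    }
}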

I plan to submit a patch to support keytabs in the next few weeks. I don’t know how long review will take and of course it’s unlikely to be applied retroactively. Write me if you want a copy.

What About Other Databases?

Other databases also support Kerberos.

Oracle: Oracle has supported Kerberos for a long time with an optional security extension. In 2013 Oracle made it free to use for all versions of its database. I haven’t had a chance to see if I can use it with Oracle 11 XE for Linux – it was cut at about the same time, and it can be tricky to set up Oracle XE on Linux. (I had it working on Ubuntu 14.04 LTS but then a routine package update broke it.)

Microsoft SQLServer: SQLServer has also supported Kerberos for a long time, at least in its Active Directory wolf’s clothing. I would expect MSSQL XE to also support AD/Kerberos.

MySQL/MariaDB: MySQL does not support Kerberos. MariaDB does via a plugin but I haven’t been able to bring my MariaDB server up to verify this.

Cassandra: Cassandra supports Kerberos. See Datastax.

MongoDB: MongoDB supports Kerberos.

Neo4j: No information.

Apache Hive: Hive supports Kerberos.

H2/Derby/embedded databases: They do not appear to support Kerberos but I can’t say this with any certainty.

(Note: if you’re not familiar with Oracle XE and MSSQL XE they’re the ‘first hit is free’ versions of the respective databases. You can use them on your website or application as long as the server is limited to a single CPU and you do not have more than 10 GB (iirc) of data. They’re usually one rev back from the commercial product. A Personal Use license lets you use the latest versions of the software but there are extremely tight restrictions – it’s basically only useful when learning how to use the respective database.)

Next Time

Next time I will discuss establishing low-level connections using Kerberos authentication.

Comments
No Comments »
Categories
CEU, linux, PostgreSQL, security
Comments rss Comments rss
Trackback Trackback

Introduction to Kerberos (part 1)

Bear Giles | April 17, 2016

Old protocols never die.

Kerberos is a three-party protocol that provides strong mutual authentication between clients and servers. It is designed for academic and enterprise organizations where there is a single source of truth regarding identity, authentication, and authorization. Think universities with faculty, students, and staff, or businesses with employees with different responsibilities. Changes must be propagated immediately – it is not acceptable for there to be a lag such as we see with X.509 digital certificates using periodically updated CRL lists. (It is possible, but expensive, to use OCSP to verify a certificate on each use.)

Strong mutual authentication is when both parties to a transaction have high confidence in the identity of the other party. A good example is when you log onto your bank’s website. The bank authenticates the user with username, password, and a second factor like the user accepting the “personal image”. The user authenticates the bank by checking the HTTPS icon in the corner and the recognized “personal image”. Many of us consider the latter somewhat broken since the system will accept a certificate from ANY certificate authority preloaded into the browser. I have nothing against the Moldovan postal service but I have no reason to trust it, and I don’t want to have to trust it in order to check my bank balance. With Kerberos there’s a personal relationship – it is our employer, our university, etc. – so there’s a much higher level of trust.

Kerberos came out of the Athena project at MIT in the 1980s and was “embraced and extended” by the Borg, I mean by Microsoft, in Windows 2000 as the basis of Active Directory. The “client” was traditionally a person but it is increasingly software as well.

There are three open source Kerberos implementations widely available to Linux users: MIT, Heimdal, and Shishi. MIT Kerberos 5 is the reference implementation and somewhat more tuned to enterprise users, Heimdal is an implementation developed outside of the US, and I know nothing about Shishi. Heimdal’s origin mattered while the US banned export of strong encryption under ITAR regulations, and it may become important again if we refight the crypto wars.

I will be discussing MIT Kerberos since I’m a little more familiar with it.

The Three-Headed Hellhound

Kerberos is named after the three-headed dog that guards the gate of Hell. This reflects the three parties in a Kerberos session:

The Client

The client may be a person (student, employee) or enterprise software. The client does not log into a remote system. Instead the client logs into a local agent and that agent performs all necessary negotiation with the KDC and servers. In the protocol this initial credential is the “ticket granting ticket” (TGT).

The Server

The server can be any software requiring authentication. Traditional servers include telnet (ktelnet), allowing remote access to a shell, and NFS, allowing remote access to a filesystem. In the enterprise environment we often see kerberized web services.

The Key Distribution Center (KDC)

The key distribution center (KDC) is the trusted third party. In early implementations the KDC was literally responsible for generating symmetric keys to be used by the client and server, hence the name. I don’t know if that’s still true. The client and server are free to renegotiate their session keys at any time.

All of this occurs within a Kerberos realm. This is an arbitrary string but by convention is the domain name in all caps, e.g., INVARIANTPROPERTIES.COM. This ensures that realms will be unique (modulo bad players). Users within a realm are identified by principals. An individual may have more than one principal but no principal should be used by more than one individual. Principals have the format username@REALM or username/role@REALM, e.g., bgiles/admin@INVARIANTPROPERTIES.COM. Servers have a similar format: servicename/fqdn@REALM.

KDC installation on Debian and Ubuntu

Today we will discuss the installation and configuration of an MIT Kerberos5 KDC on an AWS EC2 instance. MIT can use either an internal database or LDAP for its backing store. The latter will be a better choice if you already use an LDAP database for user management.

I will only discuss the former in this blog entry but might revisit the latter in the future.

Hardware Requirements

The primary hardware requirement for a KDC is security. A compromised KDC will put the entire enterprise at risk. It should be physically secured and run no other services. In production we want at least two servers (one primary and at least one secondary) and they should be in physically separate locations.

It is extremely important that the clocks are synchronized across all systems within a realm. Historically tickets would be valid for 5 to 15 minutes but with NTP running on all systems this window can be reduced to seconds. Note: this is the window during which a ticket can be used for authentication and has no bearing on how long the user can use the service.

A KDC has modest computational requirements. I believe we can start with a micro instance and only upgrade if the need arises. This means the financial cost is extremely modest even if we use multiple KDCs.

We need to open three ports in our firewall. Ports 88 and 750 are required by Kerberos; port 22 for SSH is only required for initial installation and can be blocked at the AWS Security Group level unless explicitly needed. Subsequent access can eat its own dog food with ktelnet (kerberized telnet), krsh (kerberized rsh), or SSH with Kerberos extensions.

The KDC admin server must also open port 749. Access to this port should be limited.

Debian/Ubuntu Packages

We need to install two packages: krb5-kdc and krb5-admin-server. If you are using LDAP you will want to install krb5-ldap as well. Finally we want to install krb5-doc for documentation.

$ sudo apt-get install krb5-kdc krb5-admin-server krb5-doc

During installation we will be asked three things:

  • Name of our realm – this is traditionally our domain name in all-caps.
  • KDC servers – this system
  • KDC admin servers – this system

If you wish you can leave the servers blank at this time and fix the /etc/krb5.conf file below.

Database Initialization

We initialize the internal database with a call to kdb5_util. We need to provide our realm name.

$ sudo /usr/sbin/kdb5_util create -r INVARIANTPROPERTIES.COM -s

This will take a minute or so to complete.

Defining user rights

We define our users’ rights via an ACL file – /etc/krb5kdc/kadm5.acl. See the man pages for details. One thing to keep in mind is that the user will remain logged in for 7 days by default. Roles with more authority should require reauthentication more frequently.

/etc/krb5kdc/kadm5.acl

*/admin@INVARIANTPROPERTIES.COM x  * -maxlife 4h
*/root@INVARIANTPROPERTIES.COM  ci *1@INVARIANTPROPERTIES.COM
*/root@INVARIANTPROPERTIES.COM  l  *

See the man page for kadm5.acl for a description of these entries. The gist is that any principal with ‘admin’ has full administrative rights but must reauthenticate every 4 hours. The superuser on any system can list and update user credentials.

Creating our first administrative user(s)

We must bootstrap our administrative users on the KDC itself. Subsequent user management can be handled remotely. Note that we do NOT need the master password for this – this is why it is critical that access to the KDC be limited.

This shows how to add a user with both regular and administrative rights.

$ sudo kadmin.local
kadmin.local: add_principal bgiles
Enter password for principal "bgiles@INVARIANTPROPERTIES.COM": *******
Re-enter password for principal "bgiles@INVARIANTPROPERTIES.COM": *******
kadmin.local: add_principal bgiles/admin
Enter password for principal "bgiles/admin@INVARIANTPROPERTIES.COM": *******
Re-enter password for principal "bgiles/admin@INVARIANTPROPERTIES.COM": *******
kadmin.local: quit

Registering our servers

If you did not specify the KDC server and KDC admin servers when you installed the packages you can fix this now. Edit the appropriate entry in the /etc/krb5.conf file:

...
[realms]
        INVARIANTPROPERTIES.COM = {
                kdc = kdc1.invariantproperties.com:88
                kdc = kdc2.invariantproperties.com:88
                admin_server = kdc1.invariantproperties.com
                default_domain = invariantproperties.com
        }
...
[domain_realm]
        .invariantproperties.com = INVARIANTPROPERTIES.COM
        invariantproperties.com = INVARIANTPROPERTIES.COM
...

This snippet demonstrates how to specify multiple KDC servers. I have not discussed how to perform database replication among multiple KDC servers. This is one task where an LDAP backing store would be much more convenient.

Preparing the Client

We are now ready to set up the client systems so the user can log into Kerberos (that is, get a ticket-granting-ticket (TGT)). I am not going to discuss setting up and accessing kerberized services at this time.

Debian/Ubuntu Packages

We need to install one package: krb5-user. We may also want to install krb5-doc for the rare user who reads the documentation before calling the help desk.

$ sudo apt-get install krb5-user krb5-doc

You will be asked to specify the Kerberos realm, KDC and KDC admin servers.

Logging in and out

The user can now log into the system using kinit. This command has many options – see the man page for details.

$ kinit
Password for bgiles@INVARIANTPROPERTIES.COM: ******
$ kinit bgiles/admin
Password for bgiles/admin@INVARIANTPROPERTIES.COM: ******

We can determine our current credentials with klist

$ klist
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: bgiles/admin@INVARIANTPROPERTIES.COM

Valid starting       Expires              Service principal
04/17/2016 09:40:51  04/17/2016 19:40:51  krbtgt/INVARIANTPROPERTIES.COM@INVARIANTPROPERTIES.COM
	renew until 04/18/2016 09:40:46

(Note that the credentials expire after 10 hours, not 4. I need to look into that…)

Finally we can log out with kdestroy

$ kdestroy
$ klist
klist: Credentials cache file '/tmp/krb5cc_1000' not found

You can change your password with kpasswd and change your active principal with kswitch.

Remote administration

Administrators can run kadmin remotely. It is no longer necessary for them to physically log into the KDC and run kadmin.local. They will be prompted to reenter their credentials since this is a sensitive operation.

Keytab files

Finally, it can be inconvenient to re-enter your password every time you access a remote service. Keytab files allow you to store a Kerberos principal and encryption key for use in place of a password. You must regenerate the keytab files every time you change your Kerberos password.

Keytabs are maintained with the ktutil command.
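
A typical MIT ktutil session looks something like this (the principal, key version number, and enctype are illustrative):

$ ktutil
ktutil:  addent -password -p bgiles@INVARIANTPROPERTIES.COM -k 1 -e aes256-cts-hmac-sha1-96
Password for bgiles@INVARIANTPROPERTIES.COM:
ktutil:  wkt /home/bgiles/bgiles.keytab
ktutil:  quit

# later, authenticate without typing a password
$ kinit -kt /home/bgiles/bgiles.keytab bgiles@INVARIANTPROPERTIES.COM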

For more information see What is a keytab, and how do I use one? from Indiana University.

Next Time

Next time I will discuss setting up kerberized services.

Finally A Cautionary Tale

I almost forgot something important – a server should never ask for a user’s Kerberos credentials. The user always performs local authentication and everything else is negotiated behind the scenes. I mention this because Apache had (and still has?) a Kerberos authentication module that worked by prompting the user for his Kerberos credentials and then logging into the system as that user on the server in order to obtain resources from other systems. This is entirely contrary to the design goals of Kerberos and it is a huge red flag when it happens – if a server legitimately needs to act on your behalf it is possible to get “transferable” tickets that can be used by the server for that purpose.


Should Developers Get Security Certs?

Bear Giles | January 7, 2016

Something that quickly stands out when you review the various security certifications is that nearly all require documented work experience. Documented primary work experience – as in your job description – and not just experience as part of a non-infosec job. (As a counterexample you might be a devop implementing a PCI-DSS-compliant system. You have to know a lot about security but your job description will be developer/architect/etc. and it will not count towards these certifications.) CompTIA is a notable exception and it catches flak for it from some people.

Does this make sense?

I think it does in the NOC. I keep coming back to an analogy to EMTs – you want someone who’s well-trained and experienced who can respond quickly and accurately on the front lines. A certificate that requires experience can greatly simplify the life of the hiring manager.

However I think that it does NOT make sense outside of the NOC. Experience is rarely a bad thing but if you’re designing and building software you don’t just need to know what the latest attacks are – you need to be able to integrate security fundamentals with development experience in order to anticipate where today’s bright idea could lead to attacks and how to avoid them. It’s not a one-legged stool where you can get by with just infosec experience or development experience. You need to have a balance.

Why would a developer want to go to the cost and effort to get a formal certification?

The job requires it

Some jobs, esp. in the defense industry, require formal certification for developer positions (IAT level II or III under DoD 8570.01-M). This is a complex situation since these jobs often require a security clearance as well and in a strong job market many developers may decide it isn’t worth the hassles.

This is not a theoretical concern to me since there’s a good match in town but it requires a TS/SCI clearance. Is it worth the hassles since there’s at least a half-dozen other positions within a few miles from home that won’t ask (and leak!) highly sensitive information in another OPM leak? Once is enough.

The job requires ongoing training

Many jobs require ongoing security training. That’s a few days to a week dedicated to learning the latest developments (e.g., drop any remaining use of SHA-1 immediately) with background training throughout the rest of the year by senior developers. You don’t have to have formal certification to perform the latter role but it makes life easier for everyone involved since it establishes a clear floor for the annual updates.

The agile methodology requires an advocate for security

The agile methodology is widely used since work is prioritized by the “product owner” on the basis of business needs. This is obviously a Good Thing since it provides the best value to the business but product owners are not security experts and might not understand the importance of security user stories. A team advocate for security can help educate the project manager and product owner. The best way to ensure the product owner will listen is to have a disinterested third party attest to his or her skills.

It’s cheaper to design it right than patch

This is true of all design methodologies – it is always cheaper to make a change early. A design with an eye towards security – esp. noticing where seemingly small changes will dramatically strengthen or weaken security – will always be cheaper to implement than a security-blind design that has to be fixed later. Once again the knowledge does not require certification but convincing non-technical people of the importance of the changes might.

It makes you a better developer and tester

Finally, security training makes you a better developer and tester because it gives you a better awareness of how things can break. If we’re in a hurry we’ll often write test cases that cover the happy path plus something that breaks every conditional branch, and overlook the fact that we’re not checking everything we should. This is why test coverage is a risky metric. Security awareness means we can take a step back and ask how we would attack a piece of code if we were an attacker, and that will probably expose unwarranted assumptions.

The bottom line

The bottom line is that developers will care about training and experience, not pieces of paper, and the fastest way to flip the bozo bit is to come into the room demanding that people pay attention to you just because you have a slip of paper. The business side has different rules. They must rely on internal and external evaluations, and formal certification by respected organizations is an important tool to get your message out.

If your job requires it, get a cert. If your boss’s boss cares about it, get a cert. If you need to get past the Guardians of HR with their dreaded keyword search, get a cert. Otherwise read a study guide and/or watch training videos so you keep current but only spend the money if you, yourself, would like it for your own reasons. For instance I like taking the actual exam since it forces me to study everything covered and not just the bits I find interesting or easy.
