Securing WordPress: The Basics

This is the first in an occasional series of documents on WordPress.


WordPress is ubiquitous but fragile.  There are few alternatives that provide the easy posting, wealth of plugins, and integration of themes, while also being (basically) free to use.

It’s also a nerve-wracking exercise in keeping bots and bad actors out.  Some of the historical security holes are legendary.  It doesn’t take long to find someone who experienced a site where the comments section was bombed by a spammer, or even outright defacement.  (I will reluctantly raise my own hand, having experienced both in years past.)

Most people who use WordPress nowadays rely on third parties to host it.  This document isn’t for them; hosted security is mostly outside of your control.  That’s generally a good thing: professionals are keeping you up to date and covered by best practices.

The rest of us muddle through security and updates in piecemeal fashion, occasionally stumbling over documents like this one.

Things To Look Out For

As a rule, good server hygiene demands that you keep an eye on your logs.  Tools like goaccess help you analyze usage, but nothing beats a peek at the raw logs for noticing issues cropping up.
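
For example, a cheap way to watch for the credential-guessing traffic described below is to follow the raw log and filter for the usual suspects (this assumes a Debian-style Apache log path; adjust for your distribution):

$ tail -f /var/log/apache2/access.log | grep -E 'wp-login|xmlrpc'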

The Good Bots

Sleepy websites like mine show a high proportion of “good” bots like Googlebot, compared to human traffic.  They’re doing good things like crawling (indexing) your site.

In my case they are the primary visitor base to my site, generating hundreds or even thousands of individual requests per day.  Hopefully your own WordPress site has a better visitor-to-bot ratio than mine.

We don’t want to block these guys from their work; they’re actually helpful.

The Bad Bots

You’ll also see bad bots, possibly lots of them.  Most are attempting to guess user credentials so they can post things on your WordPress site.

Some are fairly up-front about it:

...
132.232.47.138 [07:51:14] "POST /xmlrpc.php HTTP/1.1"
132.232.47.138 [07:51:14] "POST /xmlrpc.php HTTP/1.1"
132.232.47.138 [07:51:15] "POST /xmlrpc.php HTTP/1.1"
132.232.47.138 [07:51:16] "POST /xmlrpc.php HTTP/1.1"
132.232.47.138 [07:51:16] "POST /xmlrpc.php HTTP/1.1"
132.232.47.138 [07:51:18] "POST /xmlrpc.php HTTP/1.1"
...

They’ll hammer your server like that for hours.

Blocking their individual IP addresses at the firewall is devastatingly effective… for about five minutes.  Another bot from another IP will pop up soon.  Blocking individual IPs is a game of whack-a-mole.

Some belong to a “slow” botnet, hitting the same page from a unique IP address each time.  These are the large botnets you read about.

83.149.124.238 [05:01:06] "GET /wp-login.php HTTP/1.1" 200
83.149.124.238 [05:01:06] "POST /wp-login.php HTTP/1.1" 200
188.163.45.140 [05:03:38] "GET /wp-login.php HTTP/1.1" 200
188.163.45.140 [05:03:39] "POST /wp-login.php HTTP/1.1" 200
90.150.96.222 [05:04:30] "GET /wp-login.php HTTP/1.1" 200
90.150.96.222 [05:04:32] "POST /wp-login.php HTTP/1.1" 200
178.89.251.56 [05:04:42] "GET /wp-login.php HTTP/1.1" 200
178.89.251.56 [05:04:43] "POST /wp-login.php HTTP/1.1" 200

These are more insidious: patient and hard to spot on a heavily-trafficked blog.

Keeping WordPress Secure

You (hopefully) installed WordPress to a location outside of your “htdocs” document tree.  If not, you should fix that right away!  (Consider this “security tip #0” because without this you’re basically screwed.)

Security tip #1 is to make sure automatic updates are enabled.  The slight risk of a botched release being applied automatically is much lower than the risk of a critical security patch being applied too late.

Running old software is like keeping a medieval lock on your front door: it offers little real security.

Once an exploit is patched, the prior releases remain vulnerable as people deconstruct the patch and reverse-engineer the exploit(s), assuming an exploit wasn’t published before the patch was released.

Locking WordPress Down

Your Apache configuration probably contains a section similar to this:

<Directory "/path/to/wordpress">
    ...
    Require all granted
    ...
</Directory>

We’re going to add some items between the <Directory> tags to restrict access to the most vulnerable pieces.

You Can’t Attack Things You Can’t Reach

We’ll start by invoking the Principle of Least Privilege: people should only be able to do the things they must do, and nothing more.

xmlrpc.php is an API for applications to talk to WordPress.  Unfortunately it doesn’t carry extra security, so if you’re a bot it’s great to hammer with your password guesses – you won’t be blocked, and no one will be alerted.

Most people don’t need it.  Unless you know you need it, you should disable it completely.

<Directory "/path/to/wordpress">
    ...
    <Files xmlrpc.php>
        <RequireAll>
            Require all denied
        </RequireAll>
    </Files>
</Directory>

There are WordPress plugins that purport to “disable” xmlrpc.php, but they deny access from within WordPress.  That means that you’ve still paid a computational price for executing xmlrpc.php, which can be steeper than you expect, and you’re still at risk of exploitable bugs within it.  Denying access to it at the server level is much safer.

You Can’t Log In If You Can’t Reach the Login Page

This next change will block anyone from outside your LAN from logging in.  That means that if you’re away from home you won’t be able to log in, either, without tunneling back home.

<Directory "/path/to/wordpress">
    ...
    <Files wp-login.php>
        <RequireAll>
            Require all granted
            # remember that X-Forwarded-For may contain multiple
            # addresses, don't just search for ^192...
            Require expr %{HTTP:X-Forwarded-For} =~ /\b192\.168\.1\./
        </RequireAll>
    </Files>
</Directory>

If you’re not using a public-facing proxy, and don’t need to look at X-Forwarded-For, you can simplify this a little:

<Directory "/path/to/wordpress">
    ...
    <Files wp-login.php>
        <RequireAll>
            Require all granted
            Require ip 192.168.1
        </RequireAll>
    </Files>
</Directory>
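
Require ip also accepts CIDR notation, so an equivalent (and arguably clearer) form of that last line is:

Require ip 192.168.1.0/24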

This will also prevent third parties from signing up on your blog and submitting comments, which may be important to you.

Restart Apache

After inserting these blocks, you should execute Apache’s ‘configtest’ followed by reload:

$ sudo apache2ctl configtest
apache2      | * Checking apache2 configuration ...     [ ok ]
$ sudo apache2ctl reload
apache2      | * Gracefully restarting apache2 ...      [ ok ]

Now test your changes from outside your network:

(Screenshot: the browser reports xmlrpc.php as forbidden.)

Apache’s access log should show a ‘403’ (Forbidden) status:

... "GET /xmlrpc.php HTTP/1.1" 403 ...

And just like that, you’ve made your WordPress blog a lot more secure.

Interestingly, just these changes caused the attacks on my own site to drop off by 90%, practically overnight.  I suspect the better-written bots realized I’m no longer a good target and moved on to lower-hanging fruit.

Bypassing a Tunnel-Broker IPv6 Address For Netflix

My ISP is pretty terrible but living in the United States, as I do, effectively makes internet service a regional monopoly.  In my case, not only do I pay too much for service but certain websites (cough google.com cough) are incredibly slow for no reason other than my ISP is a dick and won’t peer with them properly.

This particular ISP, despite being very large, has so far refused to roll out IPv6.  This was annoying until I figured out that I could use this to my advantage.  If they won’t peer properly over IPv4, maybe I can go through a tunnel broker to get IPv6 and route around them.  Surprisingly, it worked beautifully.  GMail has never loaded so fast at home.

It was beautiful, that is, until I discovered an unintended side effect: Netflix stopped working.

(Screenshot: Netflix’s “you seem to be using an unblocker or proxy” error.  Despite my brokered tunnel terminating inside the United States, Netflix suspects me of coming from outside the United States.)

A quick Google search confirmed my suspicion.  Netflix denies access to known proxies, VPNs, and, sadly, IPv6 tunnel brokers.  My brave new world was about to become somewhat less entertaining if I couldn’t fix this.

Background

Normally a DNS lookup returns both A (IPv4) and AAAA (IPv6) records together:

$ nslookup google.com
Server:     192.168.1.2
Address:    192.168.1.2#53

Non-authoritative answer:
Name:   google.com
Address: 172.217.12.142
Name:   google.com
Address: 2607:f8b0:4006:819::200e

Some services will choose to provide multiple addresses for redundancy; if the first address doesn’t answer then your computer will automatically try the next in line.

Netflix in particular will return a large number of addresses:

$ nslookup netflix.com 8.8.8.8
Server: 8.8.8.8
Address: 8.8.8.8#53

Non-authoritative answer:
Name: netflix.com
Address: 54.152.239.3
Name: netflix.com
Address: 52.206.122.138
Name: netflix.com
Address: 35.168.183.177
Name: netflix.com
Address: 54.210.113.65
Name: netflix.com
Address: 52.54.154.226
Name: netflix.com
Address: 54.164.254.216
Name: netflix.com
Address: 54.165.157.123
Name: netflix.com
Address: 107.23.222.64
Name: netflix.com
Address: 2406:da00:ff00::3436:9ae2
Name: netflix.com
Address: 2406:da00:ff00::6b17:de40
Name: netflix.com
Address: 2406:da00:ff00::34ce:7a8a
Name: netflix.com
Address: 2406:da00:ff00::36a5:f668
Name: netflix.com
Address: 2406:da00:ff00::36a5:9d7b
Name: netflix.com
Address: 2406:da00:ff00::23a8:b7b1
Name: netflix.com
Address: 2406:da00:ff00::36d2:7141
Name: netflix.com
Address: 2406:da00:ff00::36a4:fed8

The Solution

The key is to have your local DNS resolver return A records, but not AAAA, if (and only if) it’s one of Netflix’s hostnames.

Before I document the solution, it helps to know my particular setup and assumptions:

  • IPv6 via a tunnel broker
  • BIND’s named v9.14.8

Earlier versions of BIND are configured somewhat differently: you may have different options, or (if it’s a really old build) you may need to run two separate named instances.  YMMV.

Step 0: Break Out Your Zone Info (optional but recommended)

If your zone info is part of named.conf you really should move it into its own file for easier maintenance and reuse.  The remaining instructions won’t work, without modification, if you don’t.

# /etc/bind/local.conf
zone "." in {
        type hint;
        file "/var/bind/named.cache";
};

zone "localhost" IN {
        type master;
        file "pri/localhost.zone";
        notify no;
};

# 127.0.0. zone.
zone "0.0.127.in-addr.arpa" {
        type master;
        file "pri/0.0.127.zone";
};

Step 1: Add a New IP Address

You can run a single instance of named, but you’ll need at least two IP addresses: one to answer normally, and one to answer with filtered responses.

In this example the DNS server’s “main” IP address is 192.168.1.2 and the new IP address will be 192.168.1.3.

How you do this depends on your distribution. If you’re using openrc and netifrc then you only need to modify /etc/conf.d/net:

# Gentoo and other netifrc-using distributions
config_eth0="192.168.1.2/24 192.168.1.3/24"

Step 2: Listen To Your New Address

Add your new IP address to your listen-on directive, which is probably in /etc/bind/named.conf:

listen-on port 53 { 127.0.0.1; 192.168.1.2; 192.168.1.3; };

It’s possible that your directive doesn’t specify the IP address(es) and/or you don’t even have a listen-on directive – and that’s ok. From the manual:

The server will listen on all interfaces allowed by the address match list. If a port is not specified, port 53 will be used… If no listen-on is specified, the server will listen on port 53 on all IPv4 interfaces.

https://downloads.isc.org/isc/bind9/9.14.8/doc/arm/Bv9ARM.ch05.html

Everything I just said also applies to listen-on-v6.
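
For example, a minimal sketch if you also answer queries over IPv6 (2001:db8:: is a documentation prefix; substitute your own address):

listen-on-v6 port 53 { ::1; 2001:db8::2; };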

Step 3: Filter Query Responses

Create a new file called /etc/bind/limited-ipv6.conf and add the following at the top:

view "internal-ipv4only" {
        match-destinations { 192.168.1.3; };
        plugin query "filter-aaaa.so" {
                # don't return ipv6 addresses
                filter-aaaa-on-v4 yes;
                filter-aaaa-on-v6 yes;
        };
};

This block says: if a request comes in on the new address, pass it through the filter-aaaa plugin.

We’re configuring the plugin to strip AAAA records from replies to both IPv4 clients (filter-aaaa-on-v4) and IPv6 clients (filter-aaaa-on-v6).

Now add a new block after the first block, or modify your existing default view:

# forward certain domains back to the ipv4-only view
view "internal" {
        include "/etc/bind/local.conf";

        # AAAA zones to ignore
        zone "netflix.com" {
                type forward;
                forward only;
                forwarders { 192.168.1.3; };
        };
};

This is the default view for internal clients. Requests that don’t match preceding views fall through here.

We’re importing the local zone from step 0 (so we don’t have to maintain two copies of the same information), then forwarding all netflix.com look-ups to the new IP address, which will be handled by the internal-ipv4only view.

Step 4: Include the New Configuration File

Modify /etc/bind/named.conf again, so we’re loading the new configuration file (which includes local.conf).

#include "/etc/bind/local.conf";
include "/etc/bind/limited-ipv6.conf";

Restart named after you make this change.
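
On a Gentoo-style system that looks roughly like the following; named-checkconf will catch syntax errors before they take your resolver down:

$ sudo named-checkconf /etc/bind/named.conf
$ sudo rc-service named restart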

Testing

nslookup can help you test and troubleshoot.

In the example below we call the “normal” service and get both A and AAAA records, but when we call the ipv4-only service we only get A records:

$ nslookup google.com 192.168.1.2
Server:         192.168.1.2
Address:        192.168.1.2#53

Non-authoritative answer:
Name:   google.com
Address: 172.217.3.110
Name:   google.com
Address: 2607:f8b0:4006:803::200e

$ nslookup google.com 192.168.1.3
Server:         192.168.1.3
Address:        192.168.1.3#53

Non-authoritative answer:
Name:   google.com
Address: 172.217.3.110
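
The real proof is Netflix itself: because of the forwarding zone, an AAAA query through the main address should now come back empty.  Roughly what I’d expect to see (your numbers will differ):

$ nslookup -query=AAAA netflix.com 192.168.1.2
Server:         192.168.1.2
Address:        192.168.1.2#53

Non-authoritative answer:
*** Can't find netflix.com: No answer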


Failed to retrieve directory listing

(Screenshot: FileZilla’s connection log, ending in the opaque “Failed to retrieve directory listing” error.)

I occasionally run a local vsftpd daemon on my development machine for testing.  I don’t connect to it directly; it exists to support unit tests that need an FTP connection.  No person connects to it, least of all me, and the scripts that do connect are looking at small, single-use directories.

I needed to test a new feature: FTPS, aka FTP with SSL (not to be confused with SFTP, a very different beast).  Several of our vendors will be requiring it soon; frankly, I’m surprised they haven’t required it sooner.  But I digress.

To start this phase of the project I needed to make sure that my local vsftp daemon supports FTPS so that I can run tests against it.  So I edit /etc/vsftpd/vsftpd.conf to add some lines to my config, and restart:

rsa_cert_file=/etc/ssl/private/vsftpd.pem
rsa_private_key_file=/etc/ssl/private/vsftpd.pem
ssl_enable=YES

But Filezilla bombs with an opaque error message:

Status: Resolving address of localhost
Status: Connecting to 127.0.0.1:21...
Status: Connection established, waiting for welcome message...
Status: Initializing TLS...
Status: Verifying certificate...
Status: TLS connection established.
Status: Logged in
Status: Retrieving directory listing...
Command: PWD
Response: 257 "/home/dad" is the current directory
Command: TYPE I
Response: 200 Switching to Binary mode.
Command: PASV
Response: 227 Entering Passive Mode (127,0,0,1,249,239).
Command: LIST
Response: 150 Here comes the directory listing.
Error: GnuTLS error -15: An unexpected TLS packet was received.
Error: Disconnected from server: ECONNABORTED - Connection aborted
Error: Failed to retrieve directory listing

I clue in pretty quickly that “GnuTLS error -15: An unexpected TLS packet was received” is actually a red herring, so I drop the SSL from the connection and get a different error:

Response: 150 Here comes the directory listing.
Error: Connection closed by server
Error: Failed to retrieve directory listing

Huh, that’s not particularly helpful; shame on you, FileZilla.  I drop down further to a command-line FTP client to get the real error:

$ ftp localhost
Connected to localhost.
220 (vsFTPd 3.0.3)
Name (localhost:dad): 
530 Please login with USER and PASS.
530 Please login with USER and PASS.
SSL not available
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ls
200 PORT command successful. Consider using PASV.
150 Here comes the directory listing.
421 Service not available, remote server has closed connection
ftp> quit

Ah.  Now we’re getting somewhere.

A quick perusal turned up a Stack Exchange answer asserting that “the directory causing this behaviour had too many files in it (2,666).”  My own directory is much smaller, about a hundred files.  According to this bug report, however, the real maximum may be as few as 32 files.  It’s not clear to me whether this is a kernel bug, a vsftpd bug, or just a bad interaction between recent kernels and vsftpd.

Happily, there is a work-around: add “seccomp_sandbox=NO” to vsftpd.conf.

Since vsftpd’s documentation is spare, and actual examples are hard to come by, here’s my working config:

listen=YES
local_enable=YES
write_enable=YES
chroot_local_user=YES
allow_writeable_chroot=YES
seccomp_sandbox=NO
ssl_enable=YES
rsa_cert_file=/etc/ssl/private/vsftpd.pem
rsa_private_key_file=/etc/ssl/private/vsftpd.pem
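
To sanity-check the TLS side without dragging FileZilla back into it, openssl’s s_client can speak FTP’s STARTTLS; a successful handshake prints the certificate chain and a verify result:

$ openssl s_client -connect localhost:21 -starttls ftp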

vim, screen, and bracketed paste mode

A little while back an update was introduced, somewhere, that has been driving me nuts.  I didn’t record exactly when it happened or what changed.  I suppose it doesn’t matter now.

The behavior wasn’t easy to pin down at first since it was the confluence of several things: 1) pasting 2) into vim while 3) using a non-xterm terminal like mate-terminal and 4) inside a screen session.

The behavior exhibits in several ways:

  • Pastes appear to be incomplete, or (more correctly) some number of characters at the beginning of the paste go “missing” and actually become commands to vim
  • Pastes are complete but they’re bracketed with \e[200~content\e[201~
    • some people report 0~content1~ instead, but it appears to be the same phenomenon

What’s going on?  It’s a feature called “bracketed paste mode”.  You can google it and read up on it; it has some utility.  As far as I can tell it’s related to readline.  But more importantly, there is a fix.

Add this to your ~/.vimrc:

" fix bracketed paste mode
if &term =~ "screen"
  let &t_BE = "\e[?2004h"
  let &t_BD = "\e[?2004l"
  exec "set t_PS=\e[200~"
  exec "set t_PE=\e[201~"
endif

source: https://vimhelp.appspot.com/term.txt.html#xterm-bracketed-paste

WordPress Error: cURL error 6: Couldn’t resolve host ‘dashboard.wordpress.com’

Background:

I maintain a WordPress blog that uses Jetpack’s Stats package.

Issue:

We started getting this error message when opening the ‘Stats’ page:

We were unable to get your stats just now. Please reload this page to try again. If this error persists, please contact support. In your report please include the information below.

User Agent: 'Mozilla/5.0 (X11; Linux x86_64; rv:54.0) Gecko/20100101 Firefox/54.0'
Page URL: 'https://blog.server.tld/wp-admin/admin.php?page=stats&noheader'
API URL: 'https://dashboard.wordpress.com/wp-admin/index.php?noheader=true&proxy&page=stats&blog=XXX&charset=UTF-8&color=fresh&ssl=1&j=1:5.0&main_chart_only'
http_request_failed: 'cURL error 6: Couldn't resolve host 'dashboard.wordpress.com''

The entire Stats block in the Dashboard was empty, and the little graph that shows up in the Admin bar on the site was empty as well.

Other errors noticed:

RSS Error: WP HTTP Error: cURL error 6: Couldn't resolve host 'wordpress.org'
RSS Error: WP HTTP Error: cURL error 6: Couldn't resolve host 'planet.wordpress.org'

These errors were in the WordPress Events and News section, which was also otherwise empty.

This whole thing was ridiculous on its face, as the hosts could all be pinged successfully from said server.

I checked with Jetpack’s support, per the instructions above, and got a non-response of “check with your host.”  Well, this isn’t being run on a hosting service so you’re telling me to ask myself.  Thanks for the help anyway.

Resolution:

The machine in question had just upgraded PHP, but Apache had not been restarted yet. The curl errors don’t make much sense, but since when does anything in PHP make sense?

It was kind of a “duh!” moment when I realized that could be the problem.  Restarting Apache seems to have solved it.
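
For completeness, the restart is the usual two-step on an apache2ctl-style setup:

$ sudo apache2ctl configtest
$ sudo apache2ctl restart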

NiFi HTTP Service

I’m attempting to set up an HTTP server in NiFi to accept uploads and process them on-demand.  This gets tricky because I want to submit the files using an existing web application that will not be served from NiFi, which leads to trouble with XSS (Cross-Site Scripting) and setting up CORS (Cross Origin Resource Sharing [1]).

The trouble starts with just trying to PUT or POST a simple file.  The error in Firefox reads:

Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource (Reason: CORS header 'Access-Control-Allow-Origin' missing).

You can serve up the JavaScript that actually performs the upload from NiFi and side-step XSS, but you may still run into trouble with CORS.  You’ll have trouble even if NiFi and your other web server live on the same host (using different ports, of course), as different ports count as different origins for the purposes of the same-origin policy.

(Screenshot: the HandleHttpResponse processor configuration.)

To make this work, you’ll need to enable specific headers in the HandleHttpResponse processor.  Neither the need to set some headers, nor the headers that need to be set, are documented by NiFi at this time (so far as I can tell).

  1. Open the configuration of the HandleHttpResponse processor
  2. Add the following headers as properties, with the values shown (but see below for notes regarding the values):
    Access-Control-Allow-Origin: *
    
    Access-Control-Allow-Methods: PUT, POST, GET, OPTIONS
    
    Access-Control-Allow-Headers: Accept, Accept-Encoding, Accept-Language, Connection, Content-Length, Content-Type, DNT, Host, Referer, User-Agent, Origin, X-Forwarded-For

You may want to review the value for Access-Control-Allow-Origin, as the wildcard may allow access to unexpected hosts.  If your server is public-facing (why would you do that with NiFi?) then you certainly don’t want a wildcard here.  The wildcard makes configuration much simpler if NiFi is strictly interior-facing, though.

The specific values to set for Access-Control-Allow-Methods depend on what you’re doing.  You’ll probably need OPTIONS for most cases.  I’m serving up static files so I need GET, and I’m receiving uploads that may or may not be chunked, so I need POST and PUT.

The actual headers needed for Access-Control-Allow-Headers are a bit variable.  A wildcard is not an acceptable value here, so you’ll have to list every header you need separately — and there are a bunch of possible headers.  See [3] for an explanation and a fairly comprehensive list of possible headers.  Our list contains a small subset that covers our basic test cases; your mileage may vary.

You may also want to set up a RouteOnAttribute processor to ignore OPTIONS requests (${http.method:equals('OPTIONS')}), otherwise you might see a bunch of zero-byte files in your flow.
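
A quick way to confirm the headers are actually being returned is to fake a preflight request with curl (the host, port, and path are placeholders for your own listener):

$ curl -i -X OPTIONS \
    -H "Origin: http://app.example.com" \
    -H "Access-Control-Request-Method: POST" \
    http://nifi.example.com:8081/upload

The Access-Control-Allow-* headers you configured should appear in the response.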

References:

[1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS

[2] http://stackoverflow.com/questions/24371734/firefox-cors-request-giving-cross-origin-request-blocked-despite-headers

[3] http://stackoverflow.com/questions/13146892/cors-access-control-allow-headers-wildcard-being-ignored

“ERROR: … failed to process… ” in NiFi

I was greeted by a few cryptic things in NiFi during my morning check-in.

  1. A PutSQL processor was reporting an error:
    "ERROR: PutSQL[id=$UUID>]failed to process due to java.lang.IndexOutOfBoundsException: Index: 1, Size: 1; rolling back session: java.lang.IndexOutOfBoundsException: Index: 1, Size: 1"
  2. There were no recent errors counted in the LogAttribute counter we set up to record errors;
  3. The Tasks/Time count in the PutSQL processor was through the roof, despite the errors and lack of successes.

Needless to say, the processor was all bound up and a number of tasks were queued.  Not a good start to my day.

I checked the data provenance and didn’t see anything remarkable about the backed-up data.  The error message suggests (to me) that the first statement parameter is at fault, and that parameter happened to be a date (which has been problematic for me in NiFi with a MySQL backend).  Neither that value, nor the rest of the values, were remarkable or illegal for the fields they’re going into.

It wasn’t until I spent some time looking over the source data that I saw the problem: there is a duplicate key in the data.  This error is NiFi’s way of complaining about it.

In our case the underlying table doesn’t have good keys, or a good structure in general, and I’m planning to replace it soon anyway, but updating the primary keys to allow the duplicate data (because it IS valid data, despite the table design) has solved the issue.
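
For the curious, the schema change was along these lines; the table and column names here are invented for illustration, since the real ones aren’t interesting:

-- MySQL: the old single-column key collides on legitimate
-- duplicates; widen it so those valid rows can coexist
ALTER TABLE events
    DROP PRIMARY KEY,
    ADD PRIMARY KEY (event_date, source_id, seq_no);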

NiFi Build Error

I’m testing NiFi out on my local Gentoo installation to prepare for an implementation at work, and after a rather lengthy build/test process (“ten minutes” my fanny) ran into this error:

$ mvn clean install
[INFO] Scanning for projects...
...
'Script Engine' validated against 'ECMAScript' is invalid because Given value not found in allowed set 'Groovy, lua, python, ruby'

This error left me scratching my head.  Nothing related to JavaScript/ECMAScript dependencies was mentioned anywhere.  How would you get it, anyway?  Webkit, I suppose…

Sudden epiphany: this is a new Gentoo installation, and this program, including the build script, runs on Java.  Gentoo doesn’t install Oracle’s Java by default, but instead comes with IcedTea out of the box.  That’s acceptable for some simple uses, but buggy for anything complex.  (Minecraft is a great example where it just doesn’t work.)  I hadn’t used Java for anything on this machine, so I hadn’t installed the JDK.  The build instructions specify JDK 1.7 or higher, but I didn’t think anything of it because I’m used to just having it installed.
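
If you’re not sure which JVM you’re running, the version banner gives it away; if it mentions IcedTea or OpenJDK rather than the Oracle JDK, you’ve found your culprit:

$ java -version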

echo "dev-java/oracle-jdk-bin Oracle-BCLA-JavaSE" \
  >> /etc/portage/package.license/file
emerge -av dev-java/oracle-jdk-bin
...
$ mvn clean install
[INFO] Scanning for projects...
...
[INFO] BUILD SUCCESS

Finally!

Bridging Wired and Wireless Networks, Gentoo-style

I want my wired and wireless networks to share a single 192.168.1.x address space (instead of separate 192.168.0.x and 192.168.1.x addresses).

In order to do that, we need to set up a bridge to merge disparate networks into a single space.

Part 1: The Basic Configuration

The relevant hardware and software:

  • ADMtek NC100 wired NIC (uses the tulip driver)
  • Ralink RT61 PCI wireless card (uses the rt61pci driver)
  • hostapd
  • linux 4.1.15-gentoo-r1
  • net-misc/bridge-utils 1.5
  • net-wireless/iw 3.17

Part 2: Making It Work

I started out creating a basic bridge, using the Gentoo Wiki as a guide:

cd /etc/init.d
ln -s net.lo net.br0

/etc/init.d/net.br0 start

There’s no need to change how hostapd starts; it still talks to wlan0 (not br0).

# /etc/conf.d/net

modules_wlan0="!iwconfig !wpa_supplicant"
config_wlan0="null"
config_eth0="null"
config_br0="192.168.1.1/24"
brctl_br0="setfd 0
sethello 10
stp off"
bridge_br0="eth0 wlan0"

The Problem

The above config is naive and doesn’t work right.  I got this error:

Can't add wlan0 to bridge br0: Operation not supported

Huh.  There’s nothing indicative in dmesg about the error; the last entry shows the bridge being created on the wired card and then being taken down.  Just to be sure, I created a bridge with just eth0 and it worked:

$ brctl show
bridge name   bridge id           STP enabled   interfaces
br0           8000.00045a42a698   no            eth0

After casting about a bit, I found a serverfault.com page that pointed to this fix:

$ iw dev wlan0 set 4addr on
$ brctl addif br0 wlan0

That works, but it won’t do me much good as a long-term solution.  I would need to pay a visit to the basement after every planned reboot and unplanned power outage, or else nobody could get onto the network.

( More about the 4addr option here. )

You can’t just add the option to modules_wlan0; it doesn’t work that way.  A quick visit back to the wiki suggested the solution, though: define a preup function where we can execute arbitrary commands.

The Working Config

These statements are in addition to the WAN interface config:

# /etc/conf.d/net
modules_wlan0="!iwconfig !wpa_supplicant"
config_wlan0="null"
config_eth0="null"
config_br0="192.168.1.1/24"
brctl_br0="setfd 0
sethello 10
stp off"
bridge_br0="eth0 wlan0"

preup() {
    # br0 uses wlan0, and wlan0 needs to set the
    # 4addr option before being used on a bridge
    if echo "${IFACE}" | grep -q 'br0' ; then
        /usr/sbin/iw dev wlan0 set 4addr on
    fi

    return 0
}

Then do all the accounting to clean up:

rc-update add net.br0 default
rc-update del net.eth0 default
rc-update del net.wlan0 default

I also had to update my iptables config to refer to br0 instead of eth0 and wlan0.

Finally, a reboot to test that everything starts properly.
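
Once it’s back up, brctl should show both interfaces attached to the bridge; sketched from memory, something like:

$ brctl show
bridge name   bridge id           STP enabled   interfaces
br0           8000.00045a42a698   no            eth0
                                                wlan0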