EE Certificate Too Weak

If you are using a monitoring tool like Nagios, you might have run into this error – Cannot verify certificate: EE Certificate Too Weak. It means that the matching key is too short for today's standards. I ran into it myself when an SSL certificate check watching for expiring certs hit the iDRAC interface of a Dell R510.

What is funny is that it was happening for only 2 out of 3 of them, and it wasn't fixed by upgrading to the latest iDRAC version. You can't set the key size from the web interface either; you need to use racadm, either locally or via an SSH session to the iDRAC itself. First, let's check the key size that's currently set:

/admin1-> racadm getconfig -g cfgracsecurity -o cfgRacSecCsrKeySize
1024

And then change it to a bigger value and confirm:

/admin1-> racadm config -g cfgRacSecurity -o cfgRacSecCsrKeySize 2048
Object value modified successfully
/admin1-> racadm getconfig -g cfgracsecurity -o cfgRacSecCsrKeySize
2048

After that you are ready to generate the CSR again, this time with a longer key. Please note that the longer key significantly increases the time required for that task to finish, but it doesn't affect the performance of the iDRAC interface later.
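Once the new certificate is installed, you can confirm the key length with OpenSSL from any machine. A minimal sketch – the throwaway self-signed certificate generated here only stands in for the one served by the iDRAC:

```shell
# Generate a throwaway 2048-bit self-signed cert just to illustrate the check
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo.key -out /tmp/demo.crt \
    -nodes -subj "/CN=demo" -days 1 2>/dev/null

# Inspect the public key size; a too-weak cert would show "1024 bit" here
openssl x509 -in /tmp/demo.crt -noout -text | grep 'Public-Key'
```

Against a live iDRAC you would pipe `openssl s_client -connect <idrac-ip>:443 </dev/null` into the same `openssl x509` command instead of reading a local file.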

Why I switched to WordPress?

I was quite fond of my Jekyll setup, but… I moved between multiple workstations in a short period of time. Every time I did, I had to set up the Jekyll build chain from scratch again. Then I had to remind myself how to publish a new entry (because it involved building the site locally and then rsyncing it out to the server). And all that for a tiny blog of no great importance. It was basically hindering my publication process, and on many occasions I decided not to post because it was taking too much time and usually something else failed along the way. The straw that broke the camel's back was moving to macOS, where I had tons of issues with Ruby and Jekyll dependencies. So I finally gave up and just spun up a WordPress container. It took me about 2 hours to move all the entries across, including the import of media. And it was a great opportunity to polish the entries and remove some typos.

And probably many of you will tell me I could have fixed my publication process and stayed with Jekyll, but I figured it is not the tool that is important but the content (how arrogant!). So I hope you will enjoy your stay no matter the platform.

Smokeping with NGINX

My humble little webserver (which used to host this site) is running NGINX. It works exceptionally well for my needs. I also have Smokeping running on it all the time to monitor my ISP connectivity.

As the link quality has deteriorated recently, I wanted to review the relevant graphs. It turns out that Smokeping still uses CGI in 2020 (d'oh). NGINX does not support CGI, so the idea was to spin up a CGI to FastCGI bridge and serve the content via fastcgi_pass. To get it working on Ubuntu 18.04 LTS you need to add the following lines to your virtual host definition in /etc/nginx/sites-enabled:

location /smokeping {
        alias /usr/share/smokeping/www;
        try_files $uri $uri/ =404;

        location /smokeping/smokeping.cgi {
                fastcgi_pass  localhost:9999;

                fastcgi_param QUERY_STRING    $query_string;
                fastcgi_param REQUEST_METHOD  $request_method;
                fastcgi_param CONTENT_TYPE    $content_type;
                fastcgi_param CONTENT_LENGTH  $content_length;
        }

        allow 192.168.0.0/24;
        deny all;
}

Please note it also limits access to your LAN only (assuming it is 192.168.0.0/24). Then install libfcgi-bin:

apt update && apt install libfcgi-bin

And spin up the bridge as the unprivileged user that has write access to /var/lib/smokeping/:

sudo -u smokeping -- /usr/bin/cgi-fcgi -start -connect 127.0.0.1:9999 /usr/share/smokeping/smokeping.cgi

And that's all. I use the web frontend rarely, so I spin it up manually. You can make it persistent using /etc/rc.local or systemd if you need it more often.
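If you do want it persistent, a minimal systemd unit could look like the sketch below. The unit name and the Type/Restart choices are my assumptions, not from the Smokeping docs; cgi-fcgi -start spawns the FastCGI process and exits, hence oneshot with RemainAfterExit:

```ini
# /etc/systemd/system/smokeping-fcgi.service (hypothetical unit name)
[Unit]
Description=CGI to FastCGI bridge for Smokeping
After=network.target

[Service]
# cgi-fcgi -start forks the FastCGI app and returns, so treat it as a oneshot
Type=oneshot
RemainAfterExit=yes
User=smokeping
ExecStart=/usr/bin/cgi-fcgi -start -connect 127.0.0.1:9999 /usr/share/smokeping/smokeping.cgi

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now smokeping-fcgi.service.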

Time Machine on Ubuntu 18.04

Due to the recent pandemic outbreak I had to switch to working from home. At the office I use a MacBook Pro, which gets backed up to a dedicated Time Capsule device. It works as expected, nothing fancy. Unfortunately, at home I didn't have such a contraption, but I had a custom-built NAS based on Ubuntu 18.04 LTS. It turns out that you can emulate a Time Machine target using Samba; you just need to build it yourself, as the version shipped with Ubuntu lacks Spotlight support.

Please note that I based this tutorial on Mac TimeMachine with Samba 4.8.* on Ubuntu 18.10 and made some changes to get it working on a clean Ubuntu 18.04 LTS. I also bumped Samba from 4.8.0 to 4.9.18 due to security issues, which also fixed compatibility with libtracker-sparql.

The first step is to install the required tools and dependencies:

apt update
apt install -y libreadline-dev git build-essential libattr1-dev libblkid-dev autoconf python-dev python-dnspython libacl1-dev gdb pkg-config libpopt-dev libldap2-dev dnsutils acl attr libbsd-dev docbook-xsl libcups2-dev libgnutls28-dev tracker libtracker-sparql-2.0-dev libpam0g-dev libavahi-client-dev libavahi-common-dev bison flex avahi-daemon liblmdb-dev libjansson-dev libgpgme-dev libarchive-dev

Create the file /etc/avahi/services/timemachine.service with the following contents:

<?xml version="1.0" standalone='no'?>
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <name replace-wildcards="yes">%h</name>
  <service>
    <type>_smb._tcp</type>
    <port>445</port>
  </service>
  <service>
    <type>_device-info._tcp</type>
    <port>0</port>
    <txt-record>model=TimeCapsule8,119</txt-record>
  </service>
  <service>
    <type>_adisk._tcp</type>
    <txt-record>sys=waMa=0,adVF=0x100</txt-record>
    <txt-record>dk0=adVN=TimeMachine Home,adVF=0x82</txt-record>
  </service>
</service-group>
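A typo in this XML will keep Avahi from advertising the share, so a quick well-formedness check can save some head-scratching. A sketch using Python's standard library (any XML-aware tool, such as xmllint, would do just as well):

```shell
# Parse the service file; prints OK only if it is well-formed XML
python3 -c 'import sys, xml.dom.minidom as m; m.parse(sys.argv[1]); print("OK")' \
    /etc/avahi/services/timemachine.service
```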

Download the Samba source code archive and verify it. You need to gunzip the archive first, because the signature covers the tar file, not the gzipped file (which is a bit odd).

cd /usr/src
wget https://download.samba.org/pub/samba/samba-pubkey.asc
gpg --import samba-pubkey.asc
wget https://download.samba.org/pub/samba/stable/samba-4.9.18.tar.gz
gunzip samba-4.9.18.tar.gz
wget https://download.samba.org/pub/samba/stable/samba-4.9.18.tar.asc
gpg --verify samba-4.9.18.tar.asc

After confirming it's fine, unpack it and build it. On an Atom N2800 with 4 GB RAM, using a pendrive as storage, it takes a bit shy of an hour:

tar -xvf samba-4.9.18.tar
cd samba-4.9.18
./configure --sysconfdir=/etc/samba --systemd-install-services --with-systemddir=/lib/systemd/system --with-shared-modules=idmap_ad --enable-debug --enable-selftest --with-systemd --enable-spotlight --jobs=$(nproc)
make -j $(nproc)
make install

Create the Samba configuration file /etc/samba/smb.conf. Please adjust the path under TimeMachine Home to point at the directory where you want to keep the backups. In my case it was /zstorage/backups/timemachine, which is stored on ZFS.

[global]
## Basic samba configuration

server role = standalone server
passdb backend = tdbsam
obey pam restrictions = yes
security = user
printcap name = /dev/null
load printers = no
socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=524288 SO_SNDBUF=524288
server string = Samba Server %v
map to guest = bad user
dns proxy = no
wide links = yes
follow symlinks = yes
unix extensions = no
acl allow execute always = yes
log file = /var/log/samba/%m.log
max log size = 1000


# Special configuration for Apple's Time Machine
fruit:model = MacPro
fruit:advertise_fullsync = true
fruit:aapl = yes


# Define your shares here
[TimeMachine Home]
path = /zstorage/backups/timemachine/%U
valid users = %U
writable = yes
durable handles = yes
kernel oplocks = no
kernel share modes = no
posix locking = no
vfs objects = catia fruit streams_xattr
ea support = yes
browseable = yes
read only = No
inherit acls = yes
fruit:time machine = yes
fruit:aapl = yes
spotlight = yes
create mask = 0600
directory mask = 0700
comment = Time Machine

Create the logs location and the storage subdirectory (matching what you defined in the previous step). Change USERNAME to your user's name. Please note that this user needs to exist on the system running the Samba daemon.

mkdir -p /var/log/samba
mkdir -p /zstorage/backups/timemachine/
mkdir -m 700 /zstorage/backups/timemachine/USERNAME
chown USERNAME /zstorage/backups/timemachine/USERNAME

Set the Samba password for the user (same USERNAME as above):

/usr/local/samba/bin/smbpasswd -a USERNAME

Add the Samba binaries path to /etc/profile and re-read the file:

echo 'export PATH=/usr/local/samba/bin/:/usr/local/samba/sbin/:$PATH' >> /etc/profile
source /etc/profile

Apply a fix to the systemd service definition (without it smbd won't start correctly) and then start the daemon:

sed -i 's/Type=notify/Type=simple/g' /lib/systemd/system/smb.service
systemctl enable --now smb.service

If you have a firewall running you need to allow access to port 445; for ufw it would be (assuming 192.168.0.0/24 is your LAN):

ufw allow from 192.168.0.0/24 to any port 445

Now you can set up your MacBook to use it the same way it uses a Time Capsule.

Using SoloKey on vanilla Ubuntu

It has been almost a year since I last posted here, so today we will have a quick one. I recently bought a SoloKey U2F key. It is a much cheaper, open-source replacement for the YubiKey (which is also a great product – I had a chance to use one for almost 2 years and it was flawless). On vanilla Ubuntu 18.04, when you plug in the key it gets detected correctly:

[Tue Feb 19 21:13:28 2019] usb 1-1: new full-speed USB device number 8 using xhci_hcd
[Tue Feb 19 21:13:28 2019] usb 1-1: New USB device found, idVendor=0483, idProduct=a2ca
[Tue Feb 19 21:13:28 2019] usb 1-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[Tue Feb 19 21:13:28 2019] usb 1-1: Product: Solo
[Tue Feb 19 21:13:28 2019] usb 1-1: Manufacturer: Solo Keys
[Tue Feb 19 21:13:28 2019] usb 1-1: SerialNumber: 0123456789ABCDEF
[Tue Feb 19 21:13:28 2019] hid-generic 0003:0483:A2CA.0008: hiddev1,hidraw3: USB HID v1.11 Device [Solo Keys Solo] on usb-0000:00:14.0-1/input0

Unfortunately permissions for device are incorrect and the key won’t be accessible from the browser:

$ ls -la /dev/hidraw*
crw------- 1 root root 243, 0 Feb 19 21:03 hidraw0
crw------- 1 root root 243, 1 Feb 19 21:03 hidraw1
crw------- 1 root root 243, 2 Feb 19 21:03 hidraw2
crw------- 1 root root 243, 3 Feb 19 21:13 hidraw3

To fix it, we need a simple udev rule in /etc/udev/rules.d/90-solokey.rules:

KERNEL=="hidraw*", SUBSYSTEM=="hidraw", MODE="0664", GROUP="plugdev", ATTRS{idVendor}=="0483", ATTRS{idProduct}=="a2ca"

And then reload the udev rules and replug the key (or trigger the rules, so they are reapplied without replugging):

sudo udevadm control --reload-rules
sudo udevadm trigger --subsystem-match=hidraw

Thanks to that rule, the device will have its group set to plugdev with read/write permissions for that group. By default, the first user created on Ubuntu is a member of that group, so it will work out of the box:

$ id
uid=1000(alchemyx) gid=1000(alchemyx) groups=1000(alchemyx),4(adm),20(dialout),24(cdrom),27(sudo),30(dip),46(plugdev),116(lpadmin),126(sambashare),128(kismet)

Now you can test it by going to demo.yubico.com using Google Chrome. Click Next on the first page and allow access to the key. The LED on the SoloKey should turn amber, and after you press its hardware button it will get registered correctly.

If you want to do the same from Firefox, you need to reconfigure it first. Open the about:config URL and confirm that you accept the risk. Find security.webauth.u2f there and flip it to true.

Misecu IPC-DM07-20SC IP camera with ZoneMinder

Our home was guarded by 6 low-quality analog cameras connected over coaxial cable to a Chinese DVR. At some point it broke down, apparently because of overheating. I could have bought another one for less than $100, but then I would be stuck with analog again. On the other hand, hybrid recorders that support both analog and digital are quite expensive and not very powerful.

A colleague reminded me that there is ZoneMinder – an open source solution that can replace a DVR. My friend donated an unused PC, and I reused the old HDD salvaged from the broken DVR. I only had to buy a PCI card able to capture the picture from an analog source. I found a used one, a Kodicom 8800 clone, for about $80. It worked out of the box, but I was never able to get a color picture. It wasn't such a big issue, because my plan was to replace all the analog cameras with IP ones at some point.

I wanted to start with something that wouldn't cost me an arm and a leg. Unfortunately, it is very hard to find any information about the cheap Chinese cameras sold on AliExpress; sellers only state whether they are ONVIF conformant. So I took a shot and bought a Misecu IPC-DM07-20SC for less than $30, including a PoE power adapter (injector). It supports 1080p resolution, which I wanted so I could see how much my DVR server can handle.

The camera comes with a one-page instruction sheet that directs you to their software (included on a CD) to help configure the camera. But I was using Linux, and that was a problem. When you navigate to the camera's default IP, it gives you a Chinese interface, and the seller couldn't help me with that. Apparently, if you use anything other than Internet Explorer, it defaults to Chinese because the JS is broken. I was able to log in from IE on my wife's laptop, but there were almost no options there to work with. So I installed their software, configured networking, and removed the annoying overlays with the camera name and date. It is best to disable all unnecessary settings, because everything will be handled via ONVIF and the camera will be treated as a dumb picture source.

After configuring it and connecting it to the same subnet as ZoneMinder (this is important!), I was able to pick ONVIF in Add new monitor, and after some time it got detected properly. Picture quality is decent, albeit not the best I have seen from a 1080p camera. But hey – it was less than $30, so what did you expect? 🙂

Here are pictures from the night and from a cloudy day. They were taken from the ZoneMinder archive, so the quality might be lower than what the camera itself provides. Don't mind the glare during the night – it comes from a lamp mounted just a few inches from the lens.

Day
Night

Please note that this camera uses the IPG-50HV20PES-S module manufactured by Hangzhou Xiongmai Technology. I guess other manufacturers use these parts too, so it should also work with different models. I updated the ZoneMinder wiki to spread the word. In the future I want to test some other models, especially ones with a PTZ function. Stay tuned!

Microsoft Windows 10 ISO doesn’t work with Rufus anymore

Rufus is a great tool for creating bootable USB drives and Windows To Go installations. Unfortunately, at some point Microsoft changed something, and an ISO obtained with the Media Creation Tool won't work – it is described on the Rufus GitHub page. To fix it, just go to the download page using a browser with a fake User-Agent, or download the ISO under Linux or another non-Microsoft operating system. You don't need a valid product key for that.

Sonoff TH16 with OpenHab

After an encouraging success with the Sonoff Touch, I ordered the TH16 model. It is designed to be hidden out of sight (as it is ugly and has a tiny button), and it has an additional 2.5mm port for connecting a sensor. I chose the AM2301 sensor, which provides temperature and humidity measurements.

We begin by flashing the device following my guide for the Sonoff Touch. After that we need to firmly connect the sensor. Please note that the plug needs to be flush with the casing – it won't work otherwise (it should click loudly when connected). I lost some time before figuring that out.

Then go to the web page of your Sonoff, click Configuration, then Configure Module. While there, change Module type to 04 Sonoff TH and GPIO 14 Sensor to 02 AM2301. After clicking Save it will reboot, and you will be able to see temperature and humidity on the main page of the Sonoff. If they don't show up, you need to check your cabling.

With that working properly, we need to go to Configure MQTT under Configuration. For my setup I chose a meaningful name like sonoff-road and sonoff/%prefix%/%topic%/ as the Full Topic. Clicking Save will reboot the device again, and you should be getting MQTT messages published to your broker. This time, though, they are quite different from those sent on a simple power change. The topic is tele instead of stat, and the payload is JSON-formatted data, as in the following example caught with mosquitto_sub -d -t "#":

{"Time":"2018-02-24T22:11:10", "AM2301":{"Temperature":20.8, "Humidity":41.1}, "TempUnit":"C"}

It is published with the topic sonoff/tele/sonoff-road/SENSOR.
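The JSONPath expression $.AM2301.Humidity used in the item definition simply walks that payload. A quick way to sanity-check the path against a sample message before wiring it into OpenHab – python3 here is only a stand-in for the JSONPath transform:

```shell
# Extract the humidity value from a sample Tasmota telemetry payload
python3 -c 'import json, sys; d = json.load(sys.stdin); print(d["AM2301"]["Humidity"])' <<'EOF'
{"Time":"2018-02-24T22:11:10", "AM2301":{"Temperature":20.8, "Humidity":41.1}, "TempUnit":"C"}
EOF
# prints 41.1
```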

To use this data correctly, we need to install the JSONPath transformation from the PaperUI of OpenHab. Navigate to Add-ons and, on the Transformations tab, click Install next to the JSONPath item. Now we are ready to use it in an .items file as follows:

Number ExternalHumidity "hum [%.1f]"
{mqtt="<[mymqtt:sonoff/tele/sonoff-road/SENSOR:state:JSONPATH($.AM2301.Humidity)]" }

For the sake of completeness, here is an item for handling the power state:

Switch LightRoad "Road Lights" (gLightsCeiling) [ "Lighting" ]
{mqtt=" >[mymqtt:sonoff/cmnd/sonoff-road/power:command:*:default],
<[mymqtt:sonoff/stat/sonoff-road/POWER:state:default]" }

Flashing knockoff Arduino Mega 2560

I usually use AliExpress as my first stop for DIY parts, and I bought an Arduino Mega 2560 clone there. My plan was to use it with the RFLink software to receive 433 MHz signals (and who knows – maybe send them too). It was mostly for temperature and humidity sensors, whose readings would then be fed to OpenHab. Unfortunately, I missed the information that this particular clone is equipped with a CH341 serial-to-USB chip instead of the usual ATmega16U2. And that was bad news, because the CH341 is known for driver issues and poor quality in general. For me it was disconnecting the serial port all the time, and when I tried flashing it I got the following error message:

avrdude: stk500v2_ReceiveMessage(): timeout
avrdude: safemode: Verify error - unable to read hfuse properly. Programmer may not be reliable.
avrdude: safemode: To protect your AVR the programming will be aborted
avrdude: stk500v2_ReceiveMessage(): timeout

It was still misbehaving even when I changed USB ports or booted into Windows. Thankfully, I had a USB TTL serial converter based on the PL2303 lying around. So I hooked it up this way (I created a Fritzing diagram, but a table seems much more readable):

USB TTL    Arduino Mega 2560
+5V        Vin
GND        GND
RXD        TX0 (D1)
TXD        RX0 (D0)
3V3        Not connected

USB TTL to Arduino Mega 2560 connection

Now I had a stable serial connection, so I tried flashing again and it worked flawlessly. Here is how to do it on Ubuntu 17.10, which has avrdude in the official repositories. Start by installing it:

apt-get install avrdude

Then download the latest RFLink software, unpack it somewhere, and change to the directory containing the RFLink.cpp.hex file. After that you can flash your device (note that you might need to change /dev/ttyUSB0 to whatever your system uses – check dmesg to find out):

avrdude -v -p atmega2560 -c stk500 -P /dev/ttyUSB0 -b 115200 -D -U flash:w:RFLink.cpp.hex:i -F

Because we are using the TX/RX lines directly, there is nothing to reset the Arduino and put it into flashing mode when required. You need to do it manually at some point after starting the process. I usually push the reset button after avrdude prints the message Overriding Baud Rate, but YMMV.

Please note that I added the -F switch at the end of the avrdude command. That's because my Mega came with an incorrect Device Signature set to some random hex value (0x303030 if I remember correctly). Using -F forces avrdude to flash anyway, and afterwards the device should report the proper ID – 0x1e9801. On the next flashing, -F should no longer be necessary.

And next time – do buy an ATmega16U2-based Arduino Mega 2560. It is just two bucks more. It can be recognized by the additional set of ICSP pins in the top left corner (near the USB port), clearly visible in the picture in the official Arduino Store. If in doubt, always ask the seller.

Flashing Sonoff Over The WiFi

This is a quick guide to installing alternative firmware on the Sonoff Touch using only WiFi. No soldering is required, just a laptop with a wireless adapter. I wanted to reflash this device for two reasons: to connect it to OpenHAB via MQTT, and to get rid of Itead's closed-source, cloud-enabled software. All steps below were carried out on Ubuntu 17.10, although I am pretty sure they should also work fine on Ubuntu 16.04.

Preparing the tools

Start by installing Python 3 development packages and pip:

$ sudo apt install python3-pip python3-dev

After that you need to get the latest pip, which is not bundled with Ubuntu. You could do it either via virtualenv or just for your own account; I went with the second option:

$ python3 -m pip install --upgrade pip 

After that you can clone the SonOTA repository:

$ git clone https://github.com/mirko/SonOTA.git 

Before you start flashing, make sure that you don't have any firewall rules on INPUT that could cause trouble. If you are using ufw, check that it is off:

$ sudo ufw status
Status: inactive

Flashing

When the environment is set up, you can go into the SonOTA directory and start the process:

$ cd SonOTA ; ./sonota.py

It will ask you for the IP address you want to use. Make sure it is the one on the interface connected to the same WiFi network that the Sonoff will be using. Also type in the SSID name and the password. It will look like this (please bear in mind your IP will be different):

Current IPs: ['192.168.77.19']
Select IP address of the WiFi interface:
0: 192.168.77.19
Select IP address [0]:
WiFi SSID: YourWifiSSID
WiFi Password: YourWifiPassword

Using the following configuration:
Server IP Address: 192.168.77.19
WiFi SSID: YourWifiSSID
WiFi Password:
Platform: linux
** Now connect via WiFi to your Sonoff device.
** Please change into the ITEAD WiFi network (ITEAD-100001XXXX). The default password is 12345678.
To reset the Sonoff to defaults, press the button for 7 seconds and the light will start flashing rapidly.
** This application should be kept running and will wait until connected to the Sonoff...

At this point you need to look for a WiFi network with a name starting with ITEAD- followed by some numbers. Unfortunately, this wasn't the case for me – that network wasn't showing up at all. So I had to reset the device by touching the button on the Sonoff and holding it for about 7 seconds (until the LED started flashing rapidly). After that the network showed up, and I was able to connect to it with the password 12345678.

While connected to that network, you will see some more diagnostic messages, until it shows the notice about the "FinalStage" network, looking like this:

*** IMPORTANT! ***
** AFTER the first download is COMPLETE, with in a minute or so you should connect to the new SSID "FinalStage" to finish the process.
** ONLY disconnect when the new "FinalStage" SSID is visible as an available WiFi network.

At this point you need to look for a network with the SSID "FinalStage" and connect to it. After a while it will continue with diagnostic messages (edited for brevity):

This server should automatically be allocated the IP address: 192.168.4.2.
If you have successfully connected to "FinalStage" and this is not the IP Address you were allocated, please ensure no other device has connected, and reboot your Sonoff.
............................Current IPs: []
.Sending file: /ota/image_arduino.bin
.Current IPs: ['192.168.4.2']
The "FinalStage" SSID will disappear when the device has been fully flashed and image_arduino.bin has been installed.
If there is no "Sending file: /ota/image_arduino.bin" log entry, ensure all firewalls have been COMPLETELY disabled on your system.
......200 GET /ota/image_arduino.bin (192.168.4.1) 17943.19ms

After a while the "FinalStage" network will disconnect you and disappear. That means flashing has completed, and you need to look for a WiFi name starting with sonoff- followed by 4 digits. After connecting to it, point your browser at http://192.168.4.1/ and finish the setup. This step is required because the Sonoff won't remember the settings you provided at the beginning of the whole process.