I first used a physical VoIP phone when I was living in London, in 2003. It was made by Grandstream, was corded, and registered to a SIP provider in Australia (Simtex, which I think no longer exists).
It was rock solid. Family and friends in Australia would call our local Perth telephone number, and we’d pick up the ringing phone in London. Calls were untimed, no B-party charging, and calls could last for hours without fear of the cost.
The flexibility of voice over internet was fantastic. At work, I had hard phones in colo cages and office spaces from San Francisco to New York, Hamburg and London, avoiding international roaming charges completely.
The move to Siemens Gigaset
Sometime around 2008/2009, I swapped the Grandstream set for a Siemens Gigaset DECT wireless system: a VoIP base station, and a set of cordless handsets that used the familiar and reliable DECT protocol. The charging cradle for handsets only required power, meaning the base station could be conveniently stashed right beside the home router – typically with DSL where the phone line was terminated.
It was fantastic: multiple handsets, and the ability to host two simultaneous, independent (parallel) phone calls. In any household, not having to argue over who was hogging the phone, or worry about missed inbound calls, was awesome. And those two simultaneous calls could come from the same SIP registration, or from up to six SIP registrations.
Fast forward to 2022, and I still use the exact same system, some 13 years later. I’ve added additional handsets. I’ve switched calling providers (twice). Yes, we have mobile phones, but sadly, being 8,140 meters from the Perth CBD is too far for my cell phone carrier (Singtel Optus) to have reliable indoor coverage. Yes, I could switch to Telstra, for 3x the price and 1/3 the data allowance per month (but at least I’d get working mobile IPv6 then).
Gigaset has changed hands a few times, and while I’ve looked at many competitors over the years, I haven’t found any that have wrapped up the multi-DECT-handset, answering machine and VoIP capability as well.
Yes, there are some rubbish features. I do not need my star sign displayed on the phone. Gigaset’s own SIP registrar service has been unnecessary for me (YMMV).
And there are some milder frustrations: each handset having its own address book, with a clunky Bluetooth sync and import to a laptop; each handset keeping its own history of calls made; and no IPv6 SIP registration.
What they haven’t done (that I have found) is make it clear which model is newer, and which models are superseded. Indeed, just discovering some of the models of base station in the domestic consumer range is difficult.
So the base station: which model is current? A Go Box 100? N 300? Comfort A IP flex? N300? Try finding the N300 on the gigaset.com web site!
Can I easily compare base station capabilities/differences without comparing the handsets? No!
I am looking for a base station that now supports IPv6, and possibly three simultaneous calls (two is good, but three would be better).
I keep returning to gigaset.com in the hope they have improved the way they present their product line-up, but alas, after 5 years of looking, it hasn’t got any better. It’s a great product, fantastic engineering, let down by confusing messaging and sales. At least put the release year in the tech spec so we can deduce what is older and what is newer, for both handsets and base stations.
I feel that if Gigaset made the purchase of base stations and handsets clearer, they’d sell far more.
Like many, I ditched my out-of-date ISP-provided home gateway a few years back, and about a year ago put in a Unifi Dream Machine Pro as a home gateway, along with a pair of Unifi Access Points implementing WiFi 5 (802.11ac), to take better advantage of the 1 Gbps/50 Mbps NBN connection I have.
Now, I find that WiFi 5 maxes out at around 400 Mbit/sec, so I’ve been waiting for the newer WiFi 6 APs to launch – in particular the In-Wall access point. However, along comes WiFi 6E, using the newly available 6 GHz spectrum as well as falling back to the 5 GHz and 2.4 GHz spectrum.
Then I went one further: I added a 16 TB HDD to the Unifi Dream Machine Pro and a single G4 Pro camera. This gave me around 3 months of continuous recording, and has helped pinpoint the exact time a neighbour’s car got lifted, as well as showing us the two times before that when the perps drove past – all from the end of a 50 metre driveway on the other side of a closed vehicle gate.
I wanted to have an easy way my family can bring up the video feed on the TV, large enough to see detail from each camera.
But then the pandemic hit, and the global supply chain brought things to a standstill. Unifi, and their Australian distributors and retailers, have been sorely out of stock for a long time. Only one WiFi 6E product has launched from Ubiquiti thus far, and like most of their products, immediately sold out on their US store and hasn’t made it to Aus yet. Even the UDM Pro Special Edition hasn’t surfaced either in stock in the US, or from the Australian Distributors.
So it was with some glee that I found just 5 of the Unifi Viewport devices had made it to Australia last week – the first time I’d seen stock in a year (though I could have missed it). I pounced on one, and today I unboxed it.
Contents and box of the Unifi Protect Viewport
The device shipped with an HDMI cable, some screws and a wall mount, and a small slip of instructions.
At one end of the device is a standard Ethernet port; the other end has an Ethernet-out port and an HDMI-out port. That’s handy if you already have a device that’s on Ethernet – like the TV itself – as you avoid running another patch lead back to your switch.
The actual Viewport itself was larger than I had expected, as shown when I hold it in my hand here:
Unifi Protect Viewport
I plugged it into a patch lead to a PoE port on my Switch-8, and it immediately powered up, took a DHCP lease, and was shown as pending adoption into my network.
The adoption took a moment, then a firmware update and reboot, and then it automatically connected and started showing the default layout of cameras from Unifi Protect.
There were no visible lights to indicate the unit was powered on. Meanwhile, the device showed up in the console, with the following settings:
Unifi Protect settings
As you can see from the above, the “Select a Live View” option comes from the Protect web app. I created a second Live View configured for four cameras, dragged the one camera I do have into one of the quadrants, and could then update the Viewport to instantly show the alternate four-quadrant view.
The end result, on an 80″ TV looks like this:
Viewport displaying on a TV
I left the unit streaming to the TV for several hours, and it didn’t miss a beat. I could feel a little warmth from the Viewport, but not enough that I would be alarmed.
If I were running a larger security setup, I could imagine having several large TVs, each with their own Viewport, but showing different Live Views (with one showing just the primary camera of interest).
There’s no administrative control that I’ve seen on the Viewport itself. You can’t change or select cameras, and you can’t shuttle/jog the stream forwards or backwards. It seems to do one thing – stream current feeds – and do it reliably (thus far).
The video image was crisp and clear (the above image was when it had changed to night mode). The time stamp in the top left corner appeared to roll forward smoothly. I couldn’t measure the frame rate, but it seemed pretty good – perhaps 20 fps, maybe 25 fps.
For those not familiar, SSH is the Secure Shell, an encrypted login system that has been in use for over 25 years. It replaced unencrypted Telnet for remote (text) terminal connections used to access (and administer) systems over remote networks.
Authentication for SSH can be done in multiple ways: simple passwords (not recommended), SSH Keys, and even MFA.
SSH keys are perhaps one of the most common methods; it’s simple, free, and relatively easy to understand. It uses asymmetric key pairs, consisting of a Private Key and a Public Key.
Understandably, the Private Key is kept private – ideally only on your local system – while the Public Key is openly distributed to any system that wishes to give you access.
For a long time, the key algorithm used here was RSA, with keys of a particular size (length) measured in bits. In the 1990s, 512 bits was considered enough, but more recently 2048 bits and beyond has been the norm. The length of the key is one factor in the complexity of guessing the correct combination: fewer bits means smaller numbers. However, the RSA algorithm becomes quite slow when key sizes get large, and people (and systems) start to notice a few seconds of very busy CPU when trying to connect across the network.
Luckily, a replacement key algorithm has been around for some time, leveraging elliptic curves. This article gives some overview of the Edwards-curve algorithm (Ed25519) used for creating the public and private keys.
What we get are keys that are smaller than RSA keys of similar cryptographic strength; more importantly, the CPU load is not as high.
OpenSSH and PuTTY have supported Edwards curves for some time (as at 2022), and several years ago I requested support from AWS for the EC2 environment. Today, that suggestion/wish-list item has come to fruition with this:
AWS has been one of the last places I was still using RSA based keys, so now I can start planning their total removal.
Clearly, generating a new Ed25519 key is the first step. PuTTYgen can do this, as can ssh-keygen. Save the key, and make sure you grab a copy of the OpenSSH format of the public key (a single line that starts with ssh-ed25519, followed by a string representing the key, and optionally a space and comment at the end). I would recommend having the comment include the person’s name, the year, and possibly even the key type, so that you can identify which key belongs to which individual.
You can publish the Public Key to systems that will accept it – and this can be done in parallel, with the existing key still in place. The public key has no problem being shared and advertised openly – it’s in the name. The worst thing someone can do with your public key is give you access to their system. On Linux systems, this is typically done by adding a line to the ~/.ssh/authorized_keys file (note: US spelling); just add a new line starting with “ssh-ed25519”. From this point, these systems will trust the key.
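As a sketch of what that authorized_keys line looks like, here is a small hypothetical Python check that a pasted line really is an OpenSSH-format Ed25519 public key before you append it. The helper name and sample comment are my own inventions, not part of any tool:

```python
import base64
import struct

def is_ed25519_pubkey_line(line: str) -> bool:
    """Loosely validate an OpenSSH authorized_keys line for an Ed25519 key."""
    parts = line.strip().split(None, 2)  # key type, base64 blob, optional comment
    if len(parts) < 2 or parts[0] != "ssh-ed25519":
        return False
    try:
        blob = base64.b64decode(parts[1], validate=True)
    except ValueError:
        return False
    # The decoded blob is length-prefixed: a 4-byte length (11), the
    # algorithm name repeated, then the length-prefixed 32-byte public key.
    return blob[:4] == struct.pack(">I", 11) and blob[4:15] == b"ssh-ed25519"
```

A real line produced by ssh-keygen should pass this; a truncated paste, or an old ssh-rsa line, would not.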
Next, test access using this key for the people (or systems) that will need it. Ensure you only give the private key to those systems or people that should use it – e.g., yourself. When you sign in, look for evidence that the new key was used. For example, the comment on the key (see above) may be displayed, such as:
Lastly, you can remove the older key from the trusted keys on those systems. For your first system, you may want to leave one SSH session connected, remove the older SSH key from the authorized_keys file, and then initiate a second, new connection to ensure you still have access.
Now that we have familiarity with this, we need to look at places where the older key may be used.
In the AWS environment, SSH Public Keys are stored in the Amazon EC2 environment for provisioning to new EC2 instances (hosts). This may be referenced and deployed at instance start time, but it can also be referenced as part of a Launch Configuration (LC) or Launch Template (LT). These LCs and LTs will need to be updated so that any subsequent EC2 launches are provisioned with the new key. Ideally you have these defined in a CloudFormation template, in which case adjusting the template and updating the stack is necessary; this will likely trigger a replacement of the current instances, so schedule this operation accordingly (and test in lower environments first).
There’s no sudden emergency for this switch; it is part of the continual sunrise and sunset of technologies, addressing technical debt in a systematic and continual way, just as you would migrate in AWS from GP2 to GP3 SSD EBS volumes, from one EC2 instance family to the next, from Instance Metadata v1 to v2, or from IPv4 to dual-stack IPv6.
As we round out another year, here’s a perspective on where we are up to on the IPv6 transition that has been going for 20 years, but now gathering momentum.
With around 33% of traffic being IPv6, it’s reasonably significant on a global level. The country-by-country map shows the US on 47%, Germany at 55%, the UK on 34%, and Australia at 27%. We’re fast reaching the tipping point where IPv6 is the expected traffic type.
Clearly, the biggest factor is going to be the destination services that people (and systems) want to get to offering IPv6, and then the client (users) having IPv6 available to then send that traffic.
In Australia, the dominant cell phone provider, Telstra, has been using IPv6 for its consumer business since 2016. Now, five and a half years later, we’re still waiting for Optus (Singtel), Vodafone, and many more to catch up.
For ISPs: iiNet continues to show no innovation since the engineering talent departed after the TPG acquisition. Aussie Broadband (to whom I churned) still call their IPv6 offering a beta, but it’s been very stable for me now for… years. Perhaps time to take the beta label off?
IPv6 on the Cloud
AWS made many IPv6 announcements through pre-Invent and re:Invent (the annual AWS Cloud summit/conference).
Many key services already had first-class dual-stack support for IPv6 alongside IPv4. Key amongst these is Route53, serving DNS responses over either transport.
Now we’re seeing IPv6-only capability, as IPv4 starts to look more and more like a legacy transport protocol.
With that perspective, it’s a shame to see how many of the trusted brands in the commercial ISP and mobile carrier market just aren’t up to speed. No engineering investment to really modernise.
AWS VPC: IPv6 only subnets
When defining a VPC, one of the most durable configurations is the IPv4 CIDR range assigned. Once allocated, it’s set. You can add one more IPv4 allocation if you must, in a separate range, with all kinds of complications.
Since 2016, AWS has permitted an IPv6 network allocation to dual-stack subnets in a VPC, and allocation to EC2 instances. Traffic within the VPC was still primarily IPv4 – for services like RDS (databases), etc. Slowly we’ve seen Load Balancers become dual-stack – one of the most useful pieces for building customer/Internet-facing dual-stack – but they then fall back to IPv4 on the private side of the load balancer.
The IPv6 support in subnets has an interesting feature. Initially the address allocation was AWS-assigned only, but now you can BYO; however, the subnet size is always a /64 – some 18 quintillion IP addresses. Compare with IPv4, where you can make subnets from /28 (14 usable IPs) to /16 (65,534 usable IPs).
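As a quick sanity check on those numbers, using Python’s ipaddress module (the prefixes here are illustrative documentation ranges, not real allocations):

```python
import ipaddress

# An IPv6 /64 subnet: 2^64 addresses, roughly 18 quintillion.
v6 = ipaddress.ip_network("2001:db8:1234:5600::/64")
print(v6.num_addresses)  # 18446744073709551616

# The IPv4 extremes mentioned above: a /28 and a /16.
print(ipaddress.ip_network("10.0.0.0/28").num_addresses)  # 16 total (14 usable)
print(ipaddress.ip_network("10.0.0.0/16").num_addresses)  # 65536 total (65534 usable)
```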
Two common challenges used to appear with IPv4 subnets. One: you ran out of addresses, especially in public subnets with multiple ALBs with variable traffic peaks. Two: a new Availability Zone launches, and the contiguous address space needs some consistency to fit with traditional on-premises (internal) firewalling.
Now, you only have the one consideration: contiguous address space, but you’re unlikely to exhaust a subnet’s IPv6 address allocation.
When ap-southeast-2 launched, I loved having two Availability Zones, and in the traditional IPv4 address space I would allocate a contiguous range for each purpose across the AZs equally. For example:
Purpose             | AZ A        | AZ B        | Supernet
--------------------|-------------|-------------|------------
Public              | 10.0.0.0/24 | 10.0.1.0/24 | 10.0.0.0/23
Apps (Private)      | 10.0.2.0/24 | 10.0.3.0/24 | 10.0.2.0/23
Databases (Private) | 10.0.4.0/24 | 10.0.5.0/24 | 10.0.4.0/23

Contiguous address space across two AZs
From the above, you can see I can summarise the two allocations for each purpose – the one in AZ A and the one in AZ B – into one range.
Now, for various reasons I won’t dive into here, having two AZs is great, but having three is better. However, address space is binary and works in powers of two for the purposes of subnetting and supernetting. So if I wanted to preserve contiguous address space, I should have kept 10.0.2.0/24 reserved for AZ C, with a left-over /24 (10.0.3.0/24), in order to make a larger supernet of 10.0.0.0/22 covering all Public subnets:
Purpose             | AZ A        | AZ B        | AZ C         | Reserved     | Supernet
--------------------|-------------|-------------|--------------|--------------|------------
Public              | 10.0.0.0/24 | 10.0.1.0/24 | 10.0.2.0/24  | 10.0.3.0/24  | 10.0.0.0/22
Apps (Private)      | 10.0.4.0/24 | 10.0.5.0/24 | 10.0.6.0/24  | 10.0.7.0/24  | 10.0.4.0/22
Databases (Private) | 10.0.8.0/24 | 10.0.9.0/24 | 10.0.10.0/24 | 10.0.11.0/24 | 10.0.8.0/22
Now we’ve split our address space four ways and preserved contiguity so we can still supernet, but we had to reallocate the .2 and .3 ranges. What a pain.
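The supernetting arithmetic above can be checked with Python’s ipaddress module (addresses as per the tables; a sketch, not AWS tooling). Note how the misaligned pair fails to summarise, which is exactly why the reallocation was needed:

```python
import ipaddress

# Two per-AZ /24s collapse into a single /23 (the two-AZ layout)...
pair = [ipaddress.ip_network("10.0.0.0/24"), ipaddress.ip_network("10.0.1.0/24")]
print(list(ipaddress.collapse_addresses(pair)))
# [IPv4Network('10.0.0.0/23')]

# ...and four aligned /24s (AZ A-C plus the reserved range) collapse into a /22.
quad = [ipaddress.ip_network(f"10.0.{i}.0/24") for i in range(4)]
print(list(ipaddress.collapse_addresses(quad)))
# [IPv4Network('10.0.0.0/22')]

# Contiguous but misaligned ranges do NOT summarise: .1 and .2 share no /23.
odd = [ipaddress.ip_network("10.0.1.0/24"), ipaddress.ip_network("10.0.2.0/24")]
print(list(ipaddress.collapse_addresses(odd)))
# [IPv4Network('10.0.1.0/24'), IPv4Network('10.0.2.0/24')]
```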
Now, we can take VPC subnetting further; the next increment would be provisioning for 8 AZs. That’s probably a stretch for most organisations, so 4 seems to be most common.
Now take an IPv6 lens over this, and the only thing you’re keeping contiguous over the IPv6 range is the order of allocation: from :00: through :03: for the Public subnets, then :04: to :07: for Apps, and :08: to :0b: (yes, it’s hex) for the Databases.
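To make the hex ordering concrete, here is the same carve-up with Python’s ipaddress module; the 2001:db8: prefix is a documentation range standing in for whatever /56 you are assigned:

```python
import ipaddress

vpc = ipaddress.ip_network("2001:db8:1234:5600::/56")  # VPC-sized allocation
subnets = list(vpc.subnets(new_prefix=64))             # 256 possible /64 subnets

# Allocation order 0x00-0x0b covers Public, Apps and Databases across 4 AZs.
print(subnets[0x00])  # 2001:db8:1234:5600::/64  (Public, AZ A)
print(subnets[0x03])  # 2001:db8:1234:5603::/64  (Public, reserved)
print(subnets[0x04])  # 2001:db8:1234:5604::/64  (Apps, AZ A)
print(subnets[0x0b])  # 2001:db8:1234:560b::/64  (Databases, reserved)
```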
Going backwards from IPv6 to IPv4
Another release at re:Invent for VPC was support for DNS64 and NAT64 on the NAT Gateway. On a subnet-by-subnet basis, you can have the DNS resolver return a specially crafted IPv6 address that actually has an IPv4 address embedded in it; when used with NAT64, the NAT Gateway will bridge traffic going outbound from IPv6 internally to IPv4 externally. Now you can adopt modern IPv6 internal topologies, but still reach back into the past for those integrations that haven’t gone dual-stack yet.
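The address synthesis itself is simple to sketch: DNS64 embeds the 32-bit IPv4 address into the low bits of the well-known 64:ff9b::/96 prefix (per RFC 6052). This is a minimal illustration of the mapping, not the AWS implementation:

```python
import ipaddress

def dns64_synthesise(ipv4: str, prefix: str = "64:ff9b::") -> str:
    """Embed an IPv4 address into a NAT64/DNS64 /96 prefix (RFC 6052 style)."""
    v4 = int(ipaddress.IPv4Address(ipv4))       # 32-bit integer value
    base = int(ipaddress.IPv6Address(prefix))   # the /96 prefix as an integer
    return str(ipaddress.IPv6Address(base | v4))

# 198.51.100.7 is c6.33.64.07 in hex, so it lands in the last two hextets.
print(dns64_synthesise("198.51.100.7"))  # 64:ff9b::c633:6407
```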
Of course, this would be far easier if, for all services you wanted to access, you already had the choice of IPv4 or IPv6. Which, as a service provider, you should be offering to your integration partners already.
Load Balancers end-to-end IPv6
Until recently, when traffic from the Internet hit an ALB or NLB over IPv6, it would drop down to IPv4 for the internal connection; that changed during re:Invent with end-to-end IPv6. Virtual machines (EC2 instances) now see the originating IPv6 address in packets.
Summary
It’s been 3 years since some of the critical state government projects I was working on went dual-stack in AWS. I encourage my teams to present dual-stack external interfaces, and to prepare all customer environments for this switch-over as part of their managed services and professional services deployments. It’s not complicated, it doesn’t add any cost, and it can be a competitive advantage. It’s another example of the sunrise and sunset of yet another digital standard, and it won’t be the last.
There’s a great XKCD cartoon entitled Dependency that cuts to the heart of today’s software engineering world: developers (and in turn organisations) everywhere love using libraries to accelerate their development efforts, particularly if that library of code is free to use – typically meaning free Open Source.
The image speaks about large, complex systems, critical to organisations, resting on the unpaid, thankless contributors of these libraries, upon whom everything relies.
In the last week, we’ve seen Log4j, a Java logging utility, come under such focus due to a critical remote code execution bug that can see the server side tricked into making outbound requests. A vast number of Java-based solutions from the last 15+ years depend on this library to implement message logging.
Java is widely used, as Oracle corporation points out clearly:
3 Billion Devices Run Java – Oracle
There are two sides to this: invalid requests coming in, which should be handled with sensible data validation, and the resulting external requests that servers can be tricked into making.
Now, I am not saying everyone should write their own logging library; that would be even more on fire. But we should stand ready to update these things rapidly, and we should help with either code contributions or financial donations (or both) to improve this for the common good.
Untrusted Data Validation
Validating untrusted data sources is critical. The content of a local configuration file is vastly different from a query arriving from the Internet. I’ve often joked about setting my browser’s User-Agent string to the EICAR test file content – the dummy value used to trigger antivirus software that matches on this text.
In this case, we have remote attackers stuffing custom-generated data strings into HTTP requests (and email, and other sources that accept external traffic/data) to try to trick the Log4j library into processing and interpreting this data instead of just writing it to a log file.
Web servers always accept data from the Internet, and Web Application Firewalls can offer some protection, but in this case, the actual “string to check” can be escaped, making it harder to write simple rules that match.
Restricting outbound traffic
An attacker is often trying to get better access into the systems they target; their initial foothold may be tentative. In this example, the ability to trick a target server into fetching additional data (a payload) from an external service is key. There are two main types of external data egress: direct and indirect.
In the direct model, your server – which you installed and thus trust – may be running behind a firewall, but have you checked whether you have restrictions on what it can fetch directly from the Internet?
In AWS, the default Security Group egress rule is to permit all traffic; this is a terrible idea, but it is the element of least surprise for those new to the AWS VPC environment. It is strongly recommended that you pare this down for all applications, ending up with only the minimum network access you need, even when behind a (managed) NAT Gateway or routing rules, and even if you think your server only has internal network access.
I wrote a whitepaper on this topic for Modis in 2019 about Lateral Movement within the AWS VPC, and some of the concepts there are relevant now.
Your VPC-deployed virtual machine instance probably only needs to initiate connections to S3 on port 443, and to its database server on the local CIDR (address) range. For example, if you have three subnets for databases:
10.0.0.0/26 (Databases in AZ-A)
10.0.0.64/26 (Databases in AZ-B)
10.0.0.128/26 (Databases in AZ-C)
10.0.0.192/26 (reserved for future expansion of Databases in a yet-to-be-announced AZ-D)
… and are running MySQL (e.g., RDS MySQL) in those AZs, then you probably want an egress rule on your application server/instance of 10.0.0.0/24:3306. (Note: be ready to make this all IPv6 in future.) However, your inbound rule on the same group is probably referential to your managed Load Balancer, on port 443.
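A quick check, using Python’s ipaddress module and the (illustrative) addresses from the list above, that the single /24 egress rule covers every database subnet, including the reserved one:

```python
import ipaddress

egress_rule = ipaddress.ip_network("10.0.0.0/24")  # the one CIDR in the 3306 rule
db_subnets = [
    "10.0.0.0/26",    # Databases in AZ-A
    "10.0.0.64/26",   # Databases in AZ-B
    "10.0.0.128/26",  # Databases in AZ-C
    "10.0.0.192/26",  # reserved for a future AZ-D
]
# Every /26 nests inside the /24, so the single rule covers them all.
print(all(ipaddress.ip_network(s).subnet_of(egress_rule) for s in db_subnets))  # True
```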
What about DNS and Time Sync?
If you have cut down your egress to just two rules (HTTPS for S3 bootstrapping and cfn-init signalling to the ASG, plus database traffic), what about things like DNS and time? These are typically UDP based (ports 53 and 123).
Indeed, the typical firewall rule used for NTP, when syncing from external time services, is *:123 inbound and *:123 outbound. Ouch.
AWS Time Sync Service
The good news is you do not need to permit this in your security group rules IF you are using the AWS VPC-provided Time Sync service and DNS resolvers. These are available over the link-local network, and security groups do not restrict this traffic; hence UDP port 123 can be left closed.
This time service is also scalable; you don’t need thousands of hosts pointing at one or two of your own NTP servers. The AWS Time Sync service runs from the hypervisor, so as you randomly add instances, more physical nodes (droplets) are involved in providing it, and your time service scales.
Managed & Scalable DNS Resolution
DNS can be used for data exfiltration. If you run your own DNS resolver (e.g., on Windows Domain Controllers or Linux hosts) and set your DHCP to hand this resolver address to clients, then you may be at risk of not even seeing this happen. This is an indirect way of being exploited: your end server may not have egress access to the Internet, but it can egress to your DNS resolver to… well, look up addresses. If you do run your own DNS server, you should be looking at the log of what is being looked up, matching it against a threat list, and issuing warnings of potential compromise.
Managed DNS Security Checks: GuardDuty
If that’s too much effort, then there is a managed solution for this: Amazon GuardDuty and the VPC-provided DNS resolver. In order for GuardDuty to inspect and warn on this traffic, you must be sending DNS queries via the VPC resolver. Turning on GuardDuty while not sending DNS traffic through the AWS-provided service – for example, running your own root-resolving DNS server – means the warnings from GuardDuty will probably never trigger.
By contrast, having your self-managed resolver (e.g., an Active Directory server) use the VPC resolver means that it is the one that will be reported upon when any other instance uses it as a resolver with a risky lookup! I’m sure that will cause a mild panic.
Managed DNS Proactive Blocking: DNS Firewall
Going beyond simply retrospectively telling you that traffic happened is proactively blocking DNS traffic. Route53 DNS Firewall was introduced in 2021, using managed block lists for malicious domains. This gives some level of protection, in that clients (instances) will get a failed DNS lookup when trying to resolve these bad domains.
My Recommendations
So here’s the approach I tell my teams when using VPCs:
Always use the link-local Time Sync service; it scales, and reduces SPOFs and bad firewall rules.
Always use the link-local DNS resolver; it scales. Use a Resolver Rule if you need to hook DNS traffic up to your own DNS server (AD Domain Controller).
Turn on GuardDuty, and set up notifications of the Findings it generates.
Turn on DNS Firewall to actively BLOCK DNS lookups for bad domains.
Turn on Route53 query logging for yourself, with some retention period (90 days?).
For inbound Web traffic, use a managed Web Application Firewall with managed rules, and/or scope your application to the country you’re intending to serve traffic to. In particular, block access to administrative URL paths that don’t come from trusted source ranges.
Leverage any additional managed services that you can, so you minimise the hand-crafted solutions in your application.
Template your workload, and implement updates from template automation; no local changes. Deploy changes rapidly using DevOps principles. Socialise with your team/management the importance of full stack maintenance and least privilege access — including at the network layer, ingress and egress — and schedule and prioritise time to include technical debt in each iteration, including the updating of every third party library in your app.
If you have a DevOps pipeline with something like SonarQube or Whitesource, have it report on dependencies (libraries), and get reports on how out-of-date those libraries are, and/or whether those out-of-date versions have known CVEs against them. Google Lighthouse (in the browser) does a great job of this for JavaScript web frameworks.
For this exploit you need to go wider than what you run in cloud: your company printer (MFP), network security cameras, VoIP phones, UPS units, air-conditioners, smart hubs, TVs, home Internet gateways, and other devices will probably have an update. Your games console, and the games on it (this started from an update in Minecraft to address this and has… escalated quickly!). Even the physical on-prem firewalls and virtual appliances themselves – but ensure you don’t just do firewalls and ignore the larger landscape of equipment you have.