News Media: Generative AI – can you trust the news?

A news story broke in the last week about an image of a politician in Australia, shown in a still as part of a journalist's introduction to a story. The story itself is irrelevant here; it is the image used to open it that matters:

Now it turns out the image of Ms Purcell (the politician in the photos above and below) was extended using Adobe Photoshop and “Generative Fill”, starting from a smaller image than the one shown on the right above:

The Australian Broadcasting Corporation (ABC) has a story on this which is worth a read.

They look at it as a question of ethics. And Ms Purcell called out sexism on social media. All of which may or may not be true. Adobe has naturally defended their tool, which seems reasonable to me.

Indeed, the appeal of Generative AI tooling is clear from Adobe’s own advertising:

The fact remains that the digital artist using the generative fill tool, and the editor, felt that this did not change the story but served the purpose of the lead-in to their piece.

However, I want to raise the question of what impact the change had on the viewer, who saw this image and perhaps jumped to some conclusions based upon what they saw. Pretty quickly you come to the realisation that the images you see from a news media organisation are not 100% factual. It is no longer news, but entertainment. This small edit may alter the viewer's perception of, and predisposition to, a topic.

People have historically trusted news organisations to show us facts, and to inform us. We have inherently taken social media to be somewhat less trustworthy. We hear about journalistic integrity. And while this is minor, it is the fact that it has been detected, highlighted, and confirmed that is slightly alarming.

DeepFake voice and video have been around for some time now, and we always want to ensure that the data we rely upon comes from credible sources. Perhaps the use of Generative AI in news media, and newspapers, should be frowned upon or forbidden.

Media (entertainment) organisations that wish to sway the political discourse may of course be doing much more of this. Those are the media outlets with a less than stellar reputation, where the astute consumer will understand that what’s presented may be a version of the truth that is enhanced for various purposes.

If it is for satire, then that’s fine (if the platform or source is understood to be satirical).

If it is to undermine society and influence elections and politics, and impact society for personal gain, that’s not what I would like in my society.

Perhaps this is an innocent mistake, with no malice or forethought about how such generated fill may change perceptions.

Perhaps some training for journalists (and their supporting image editors), and a statement from these organisations on their use of generated content?

The Hype Cycle: Gen AI

There’s a word in “Gen AI” that I want you to concentrate on: generative.

Next I’d like you to think about areas in your organisation where you can safely generate (and review what’s generated). This is probably not safe in your invoicing, product testing & QA, CRM records, etc. Those systems all use something else: facts. Facts cannot be generated.

The inordinate amount of hype around GenAI – including finance people on the Australian Broadcasting Corporation (ABC Australia) talking about the demise of one company because “Gen AI can now do what their product tech was” – is well out of proportion.

Have you noticed how the hype about the Metaverse has dropped off?

And blockchain?

These are all great technologies, but let's face it: in the majority of cases they have very little to do with the core business of what your organisation does – unless you're in the arts and creative industries.

There are fantastic demos from Adobe where they generate additional image content. There are prose, poems and fiction being generated. All of which is circling around, creating content from existing examples. It may look fresh, but it's generated from existing input, without human intuition, insight, or intelligence.

We see “director of GenAI” as a job title, but I think that will last about as long as “director of blockchain”.

These technologies do have a place – I called out Adobe above, and you may use their software – but remember, the advantage is to Adobe’s product, and probably not yours. You likely don’t work for Adobe (I’m not picking on them, but the data is in: most of the world does not work there). You’re likely to be a few layers down in the value chain. You’ll benefit from it, but I don’t think you’re going to be training your own LLMs. Someone else will, and they’ll charge you for it or figure out a way to recoup the expense by some method.

But remember, it's generated. I can use GenAI to give me tonight's lottery numbers. While plausible, they are unlikely to be correct.

Gartner’s Magic Quadrant for Hyperscaler Cloud Providers – 2023

The traditional Gartner tea-leaves view of hyperscaler cloud providers was renamed in 2023, from “Magic Quadrant for Cloud Infrastructure and Platform Services” to “Magic Quadrant for Strategic Cloud Platform Services”. But the players are all the same as in 2022:

In grey are the 2022 results, and in blue are the 2023 results.

  • Alibaba slips a whole quadrant, with a large drop in its completeness of vision
  • Oracle rises a whole quadrant to join the leaders, but only just
  • Tencent dropped in its ability to execute
  • IBM picked up substantially, but is still a niche player (they are also the world's largest AWS Consulting Services Partner when counted by AWS certification numbers (21,207), followed closely by Tata Consultancy Services (21,200))
  • Huawei regressed in its completeness of vision, but marginally improved its ability to execute
  • Google rose, now starting to approach AWS and Microsoft
  • Microsoft improved its ability to execute
  • AWS dropped its “completeness of vision” slightly

There are only really three legitimate global hyperscaler contenders: AWS, Microsoft and Google, in that order. The rest are focused and founded within the Great Firewall of China – or are IBM.

WiFi 802.11ax (WiFi 6E) on 6GHz

Over the end of 2023, I moved home. A long process of three years of construction, and I have a shiny new pad. And of course, back in 2020, I had done diagrams for the cabling and access points that I would require. I had used the Ubiquiti Networks Design Centre, design.ui.com, to lay out the plan: two floors of primary residence, with WiFi access points around the outer walls.

I wanted the flexibility of also having some ethernet patches, so it made sense (to me) to use the Unifi In-Wall access points, as they provide a 4-port switch (and one of those ports is pass-through PoE). I also wanted to ensure good WiFi coverage at the far rear of the property, by a proposed swimming pool: no excuse for whinging about coverage for iPad users when lounging at that end of the property.

In the period that passed, Ubiquiti released the U6-IW device to replace the IW-HD, and the Enterprise U6-IW, which ups the game to include a radio on the 6GHz spectrum in addition to the existing 2.4GHz and 5GHz.

This is now all in place. The topology now looks like this:

The core of this is still my original Unifi Dream Machine, and a 24-port Enterprise switch: you'll note that most ports are actually used. Nearly everything plugged directly into the core switch is PoE: security cameras and In-Wall access points.

Nice, so let's fire up 6GHz (for newer devices) and see what we can do. I went to edit the existing wireless network definition, and ticked the “6GHz” option. At this point, the WiFi security instantly changed from “WPA2/WPA3” to WPA3 only: the 6GHz band mandates WPA3, so WPA2 cannot be offered on an SSID that includes it.

This instantly dropped half my devices off the network. Even the newest of home “WiFi” appliances – dishwashers, garage doors, ovens, home security systems, doorbell intercoms – cannot handle WPA3. Indeed, let's look at these devices:

  • Miele oven (2023): WiFi 4
  • DeLonghi Coffee machine (2023): WiFi 4
  • Bosch Dishwasher (2023): WiFi 4
  • Fronius 10kW solar inverter (2023): WiFi 4
  • iRobot Roomba J7+ (2022): WiFi 5

These are all new devices, and yet the best WiFi they support means I'll have to leave 802.11n enabled for the next decade or longer (until I replace them with something newer).

So now I have two WiFi networks: the primary ESSID that is 6GHz and WPA3, and a secondary “compatible” ESSID that does not permit 6GHz, but does support WPA2.

These device manufacturers only list “WiFi” in their sales messaging. None of them go to the next level of calling out which WiFi version they support, and which version of WPA. It's time that manufacturers caught up on this, and enabled consumers to select products that are more secure, rather than products that force deprecated protocols to remain in use.

Cloud Optimisation all the rage in 2023

I have been tinkering with AWS since 2008, and delivering AWS Cloud solutions since 2010, and in this time, I’ve seen many cloud trends come along, and messaging subtly change from all of the hyperscale IaaS and PaaS providers.

This year, we’re seeing more talk about Optimisation. In the 2023 earnings calls, we heard:

  • Microsoft: “Customers continued to exercise some caution as optimization … trends … continued”
  • Google: “slower growth of consumption as customers optimized GCP costs reflecting the macro backdrop”
  • AWS: “Customers continue to evaluate ways to optimize their cloud spending in response to these tough economic conditions.”

We have also seen the messaging around Migration to cloud evolve to “Migrate & Modernise”. This is putting pressure on the laziest, simplest, and least effective of the “Seven R’s of cloud migration”, being “Rehost” and “Relocate”.

Why is this?

Rehost takes the existing spaghetti of installed software, and runs it in exactly the same way on a hyperscale cloud provider's concept of a virtual machine. And in a true least-effort approach, if you previously had a virtual machine with 64 GB of RAM (even if only 20% utilised), then you would select the closest match in a simple rehost/reinstall.

If you had 10 application servers on 24×7, then you still get 10 cloud virtual machines, 24×7, even if you only need that peak capacity for one day of the year.
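As a back-of-envelope illustration of that gap (the hourly rate and fleet sizes below are assumptions for illustration only, not real pricing):

    # Back-of-envelope sketch of the rehost cost gap; all numbers are illustrative.
    HOURLY_RATE = 0.20        # assumed cost per application server per hour
    HOURS_PER_YEAR = 8760

    # Rehost: all 10 servers running 24x7, all year.
    rehost_cost = 10 * HOURS_PER_YEAR * HOURLY_RATE

    # Right-sized: 2 servers as a baseline, scaling to 10 for one peak day (24 hours).
    right_sized_cost = (2 * HOURS_PER_YEAR + 8 * 24) * HOURLY_RATE

    print(f"Rehost, 24x7: ${rehost_cost:,.0f} per year")       # roughly $17,500
    print(f"Right-sized:  ${right_sized_cost:,.0f} per year")  # roughly $3,500

That is several times the cost, before you even touch instance families or Reservations.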

And even more interesting is paying licensing fees for a virtualisation layer you no longer need to pay for (and getting a different experience when you do).

Not very efficient. Not very smart. But least effort.

So why do organisations do this?

That is easy: It’s complicated to do well. It takes time, experience, and wisdom.

This technology industry is full of individuals who think their job is to make nothing change. Historically, many IT service frameworks were designed around slowing the rate of change, diametrically opposed to DevOps. And it is full of organisations that see the cost of technology operations, not the value of it.

Cost has been driven down so much that the talented individuals have left, and only those who remain (who perhaps couldn't get a job elsewhere) are keeping the lights on. They don't have the experience, knowledge or wisdom to know what good looks like; they just know what has, for the last 30 years, been “stable”.

And while some engineers are keen to learn new things, and use that, many senior stakeholders have in their mind “what’s the least we can do right now”.

So when it comes to a cloud migration, they see “least change” as “least risky”, not “more costly”.

In 2023, at the AWS Partner Summit, AWS representatives stated that “only some 15% of workloads that will move to the cloud have now done so”. They added that that 15% now requires some level of modernisation.

Again, 10 years ago, many organisations moved large workloads in a lift-and-shift manner to the same number of instances, on the instance families of the day. Perhaps the m3.xlarge in AWS, or similar elsewhere. And then the people who knew what Cloud was departed the scene, leaving an under-experienced set of individuals to keep the lights on and change nothing.

I once did a review of a 3rd-party analytics package running on 4 m3.xlarge instances; they had acquired a 3-year Reservation, and kept the instances running 24×7. They didn't use them all the time. They had never made any changes (not even OS patching). They were running a RedHat 7.x OS.

Two cycles (yes, 2) of newer instance families had been released in that period: the m4, and then the m5. Because of the Linux kernel in that version of RedHat, they were prevented from moving these virtual machines to a newer AWS EC2 instance family due to lack of kernel support.
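As an aside, the usual blocker when moving from the older Xen-based families (like the m3) to the Nitro-based families (like the m5) is ENA and NVMe driver support. A minimal sketch of checking the instance-level ENA flag is below, using boto3 with a hypothetical instance ID and an assumed region; the guest OS still needs the matching driver regardless of this flag:

    # Minimal sketch: check whether an instance has ENA support flagged,
    # a prerequisite for Nitro-based instance families such as the m5.
    import boto3

    ec2 = boto3.client("ec2", region_name="ap-southeast-2")  # region is an assumption

    attr = ec2.describe_instance_attribute(
        InstanceId="i-0123456789abcdef0",  # hypothetical instance ID
        Attribute="enaSupport",
    )
    # This flag only covers the instance/AMI setting; the OS also needs the ena driver.
    print("ENA enabled:", attr.get("EnaSupport", {}).get("Value", False))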

In reality, they were fortunate, as they were running a version of RedHat that finally (after 20+ years) supported in-place upgrades. The easiest path ahead was to snapshot (or make an AMI of) each individual host, then in-place upgrade to the latest RedHat release, and then, while the instance was stopped, adjust the instance family to the m5 equivalent. This alone would save them something like 20% of cost. They could then take out a Reservation (my recommendation was for one year, as things change…) and they would have ended up at over 80% cost reduction – and with faster performance.
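As a rough illustration, that stop-and-resize step might look something like the boto3 sketch below; the instance ID, region and target type are placeholders, and it assumes the in-place OS upgrade has already been completed:

    # Minimal sketch: back up an instance, then resize it to an m5 equivalent.
    import boto3

    ec2 = boto3.client("ec2", region_name="ap-southeast-2")  # region is an assumption

    instance_id = "i-0123456789abcdef0"  # hypothetical instance ID
    target_type = "m5.xlarge"            # the m5 equivalent of an m3.xlarge

    # 1. Take an AMI as a rollback point before touching anything.
    image = ec2.create_image(
        InstanceId=instance_id,
        Name=f"pre-resize-backup-{instance_id}",
        NoReboot=True,  # avoid an extra reboot; accept a crash-consistent image
    )
    print("Backup AMI:", image["ImageId"])

    # 2. Stop the instance: the instance type can only be changed while stopped.
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

    # 3. Change the instance type, then start it again.
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        InstanceType={"Value": target_type},
    )
    ec2.start_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])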

Of course, far cleaner is to understand the installation of the 3rd-party analytics software, and the way it clusters, and to replace each node with a clean OS, instead of keeping the cruft from bygone installations.

And beyond that would have been the option to not Reserve any instances, but just turn them off when not in use.
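Something as simple as a scheduled function that stops tagged instances outside business hours would do it. A minimal sketch is below; the tag key and value are assumptions, and it is intended to be triggered on a schedule (for example, nightly):

    # Minimal sketch: stop any running EC2 instances tagged Schedule=office-hours.
    # Intended to be run on a schedule, e.g. as a nightly Lambda function.
    import boto3

    ec2 = boto3.client("ec2")

    def lambda_handler(event, context):
        # Find running instances that opted in to the schedule via a tag.
        response = ec2.describe_instances(
            Filters=[
                {"Name": "tag:Schedule", "Values": ["office-hours"]},
                {"Name": "instance-state-name", "Values": ["running"]},
            ]
        )
        instance_ids = [
            instance["InstanceId"]
            for reservation in response["Reservations"]
            for instance in reservation["Instances"]
        ]
        if instance_ids:
            ec2.stop_instances(InstanceIds=instance_ids)
        return {"stopped": instance_ids}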

But they hadn’t.

Poor practices in basic maintenance and continual patching and upgrades meant they had a lot of work to do now, rather than already be up to date.

I've seen a number of Partners in the Cloud ecosystem get recognised and rewarded for the sheer volume of migrations they have done. And despite these being naive lift-and-shifts that look good for the first 6 months, the bill shock then starts to set in. And over time, the lack of maintenance becomes an issue.

Even the “Re-platform” pattern of migration (adopting some cloud managed services in your tech stack), perhaps something like a Managed Database service, can have its catches. Selecting a managed version of, say, Postgres 9.5 some 5 years ago would have put you in trouble when 9.6, 10, 11, 12, 13 and 14 came out, because as newer versions were released, older versions were deprecated, and eventually unavailable. If you weren't adequately addressing technical debt and maintenance tasks as part of your standard operations, then you're looking at possible operational trouble.
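Keeping an eye on that deprecation treadmill is not hard. A minimal sketch listing which PostgreSQL versions RDS currently offers (the region is an assumption) could look like:

    # Minimal sketch: list the PostgreSQL engine versions RDS currently offers,
    # to check whether the version you run is approaching deprecation.
    import boto3

    rds = boto3.client("rds", region_name="ap-southeast-2")  # region is an assumption

    versions = set()
    for page in rds.get_paginator("describe_db_engine_versions").paginate(Engine="postgres"):
        versions.update(v["EngineVersion"] for v in page["DBEngineVersions"])

    print("Available RDS PostgreSQL versions:", sorted(versions))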

One clever tactic to address this is to bring in engineers who are familiar with more contemporary configurations – experts in the field (whom you can spot by the 6+ concurrent AWS Certifications they hold) – who can perform a Well-Architected Framework Review. This is typically a short, one-week engagement from a professional consulting company, borrowing experienced talent into your business that can give you clear, addressable mitigations covering many facets, including cost.

If you previously did a Lift & Shift and found it expensive, perhaps your organisation's ability to bring expertise to bear on the work is missing. If you don't know where to start, then start asking for expertise.

So why are all the cloud providers talking about Optimisation? Because they know their customers can do better on the same cloud provider. And they would rather have a customer optimise their spend and stay, than migrate & modernise to a competitor and start the training and upskilling process again.