David Buttle, Author at Press Gazette
https://pressgazette.co.uk/author/davidbuttle/

Why Microsoft Copilot Daily launch is ‘moment of significance’ for news industry
https://pressgazette.co.uk/comment-analysis/why-microsoft-copilot-daily-launch-is-moment-of-significance-for-news-industry/
Wed, 16 Oct 2024

Copilot Daily will give users an AI-generated summary of news and weather.

[Image: a phone resting on a laptop keyboard, its screen showing Microsoft's Copilot website: “Your everyday AI companion.”]

At the start of the month, in a broader announcement about its latest AI features, Microsoft quietly unveiled Copilot Daily which “helps you kick off your morning with a summary of news and weather, all read in your favourite Copilot Voice”.

It goes on to explain how “Copilot Daily will only pull from authorised content sources… such as Reuters, Axel Springer, Hearst Magazines, USA Today Network and Financial Times”. Publishers are being paid – although on what terms we do not know – for content when it is used in Copilot Daily.

This low-key announcement represents a real moment of significance for the news industry. For some time it has been clear that generative AI can be used to create highly-personalised information services; the technology is really good at selecting and synthesising content from a large dataset, based on a set of parameters. But this is the first time these capabilities have been deployed in a news context by a major AI developer, with financials attached for the content creators.

This matters for three reasons. Firstly, services like this give rise to a new set of intermediation risks. Secondly, these risks bring to the fore licensing decisions for publishers which are strategically consequential in the era of AI disruption. Finally, it adds to the pressure on Google’s faltering relationship with publishers, particularly around AI Overviews. Let’s examine each of these in turn.

To date, AI intermediation has been manifest in the form of disruption to referrals from search. This has arisen from two mechanisms:

Firstly, Google incorporating AI Overviews into its search product, thus reducing or entirely eliminating the need for users to visit a ‘destination’ site.

Secondly, via consumers adopting AI tools in place of search. In a new survey, The Information found that more than three-quarters (77%) of its readers are using generative AI tools in place of search, and over a quarter report doing so in a majority of cases. Whilst this is not a representative sample, this tech-native audience is highly likely to be a leading indicator of future consumer behaviour. The underlying reason is simple: these tools are just better for certain information retrieval tasks.

[Read more: AI revolution for news publishers is only getting started]

As a result of these mechanisms publishers should expect a structurally-declining flow of referrals from Google. But with Copilot Daily, Microsoft is creating a product which, in some circumstances and for some users, will disrupt an entirely different kind of traffic: that which arrives directly.

Copilot Daily appears to be fairly rudimentary at the moment. But imagine how powerful it could be if it understood what was in your diary that day, in your inbox, on your to-do list, who your favourite columnists are, which media outlets you subscribe to and what specific news stories you’re interested in.

Instead of waking up and checking the FT or New York Times homepage or app, I can bark a command and Copilot Daily – or an AI-powered Apple News product – will give me a highly personalised briefing for my day.

Now clearly this isn’t black and white: even if all those factors fed into the selection of content, many users will still use their news apps and publisher homepages, as consumers have always seen value in an editor’s curated view of what is important. But unlike AI Overviews, or the use of ChatGPT or Perplexity in place of Google – which mainly erode search referrals – these engagement losses will fall on direct traffic to publisher properties, the growth of which has been a key strategic priority and the size of which is seen as a crucial measure of resilience.

As a consequence of this development, publishers need to consider a new set of strategic trade-offs. It seems inevitable that, in future, a greater proportion of news publisher revenue will need to come from licensing content as an input to a user-facing service. But in the context of that inevitability – underpinned by the advent of this new technology and its utility in this setting – what does a good deal look like today? And how should any outlet balance the brand and reach upsides with the substitutional downsides?

Beneath these broad questions sit some granular and fiddly ones. For example, what content can be summarised – everything or just a subset? How long can those summaries be? Should access be provided in real-time or should there be a delay? Should we push to insert a termination clause? Or category exclusivity? How do we want our brands to be represented? Crucially, what is a fair price?

It’s very hard to answer these questions right now, but publishers should be thinking about them and playing forward the likely destination for this technology. Finding the right balance between a loyal audience monetised through engagement on owned-and-operated platforms and a peripheral audience monetised by licensing content to an intermediary service provider will be, in my view, the central strategic challenge of the AI era of news publishing.

Finally, this is bad news for Google. Regulators – and publishers themselves (particularly their legal teams) – will all be asking why, if Microsoft is paying to summarise content, Google shouldn’t be too. The only answer is that, thanks to the monopoly position it holds in online search and the consequent imbalance in bargaining power, publishers cannot demand and secure payment.

Regulatory enforcement and judicial proceedings do not move quickly. But over the medium term it’s looking increasingly hard for the Mountain View giant to maintain the position that it will not pay for the use of content to inform AI Overviews. Possibly even general search.

As we see AI deployed in these kinds of settings – and licensing markets emerge to facilitate them – more profound questions about the future of search follow: Will conventional, general search become focused on commercial queries (where the value exchange is clearer) and a new layer of services emerge, built on licensed content, as the main access point for news and broader informational queries?

Predicting the future is hard (and fraught with the capacity to deliver embarrassment) but this certainly feels like the direction of travel.

Disappointment for publishers as Artificial Intelligence Bill missing from King’s Speech
https://pressgazette.co.uk/comment-analysis/disappointment-for-publishers-as-artificial-intelligence-bill-missing-from-kings-speech/
Thu, 18 Jul 2024

Copyright reform protecting rights-holders like publishers "urgently needed", David Buttle writes.

[Image: Keir Starmer speaking at the House of Commons despatch box, with Angela Rayner sitting to his left and Rachel Reeves on his right]

The Labour Party set out its legislative agenda in the King’s Speech this week. In it, the party softened its commitment to introduce legislation to regulate AI. This does not augur well for a rapid resolution to the AI/IP problem.

This King’s Speech was never expected to hold much for the news sector. The long-running legislative changes that mattered most were passed through wash-up at the end of the last parliament.

In the last days of the Conservative administration we finally saw the passage of the Digital Markets, Competition and Consumers Act. This will give force to a new regulatory regime – five years in the making – aimed at addressing competition issues in digital markets. The same wash-up also delivered the repeal of Section 40, which could have required publishers to pay legal fees to complainants – regardless of the outcome of their complaint – unless they were signed up to a state-backed regulator.

But increasingly these feel like yesterday’s battles. With Google referrals dropping and the long-standing interdependence with search faltering, the frontline for publishers in their online operations is becoming AI.

Large language model chatbots have been trained on publisher archives, without authorisation or payment. In some instances they are being used by consumers instead of original content. If you’re looking for a travel itinerary or recipe inspiration, you’re going to get what you need faster by using AI instead of scouring multiple sites via search.

[Read more: Google AI Overviews breaks search giant’s grand bargain with publishers]

In the days prior to the King’s Speech it was trailed that we should expect a firm commitment to introduce an Artificial Intelligence Bill in the first year of the Starmer government.

Ultimately this was weakened, with instead the King telling us that his government would “seek to establish the appropriate legislation” to regulate “the most powerful artificial intelligence models”.

Whilst perhaps tonally a slight advance on the previous administration’s position, this suggests both a delay and a focus on end-of-the-world harms (a useful and effective means by which AI firms have distracted lawmakers), rather than the more prosaic issues surrounding copyright infringement.

Conservative government ‘made a mess’ of AI policy – what’s next?

The outgoing Conservative government made a mess of policy in this space. Back in summer 2022, out of nowhere, it announced the intention to create an extreme form of “text and data mining” copyright exception. This would have given free rein to anyone wanting to ingest copyright materials to, for example, train a large language model. This exception would have applied even if that model is being used commercially.

Outcry from the creative sectors followed. This was amplified when, a few months later, OpenAI released ChatGPT which, it was clear, had been trained on a corpus of data including vast amounts of copyright material.

Ultimately the government scrapped the exception in early 2023 and instead tasked the Intellectual Property Office with establishing a working group to try to resolve the issue through the creation of a voluntary “code of practice”. After months of predictably fruitless work, this was also abandoned, with participants failing to agree on the most basic points (i.e. whether a licence is required for AI model training).

From that point until the government limped out of office, its policy was seemingly “masterly inactivity”. Court cases had started to be filed on both sides of the Atlantic and rather than weigh in to support rights-holders, instead it sat back and let the judicial system do its thing.

[Who’s suing AI and who’s signing: Publisher deals vs lawsuits with generative AI companies]

Rumours abound that ministers and officials were subjected to intense lobbying from tech firms. Any use of the word “licensing” met fierce opposition. Decisions about where to deploy AI investment were a powerful bargaining chip for those working under pro-tech Prime Minister Rishi Sunak (note that Labour’s manifesto also describes an industrial strategy which “supports the development of the Artificial Intelligence (AI) sector”).

But the court cases will take many years to play out as technical arguments surrounding the precise processes involved in training an LLM are scrutinised. The consensus in the UK is that the existing legal framework will, ultimately, offer the protections that rights-holders need. However, much harm may be done in the interim.

Whilst an AI Bill may not have been the right vehicle to provide clarity on the application of IP law to AI, it would be a good means of ensuring transparency around training data and signalling the Government’s position on this issue.

The EU’s AI Act includes such a provision but the wording leaves substantial wriggle room for big tech (requiring only “sufficiently detailed summaries” of training data to be published). The UK could have set the global standard in this narrow space.

Clearly it is early days for the Labour administration. There are reasons to believe that its instincts are likely to be more supportive of rights-holders and the UK’s creative sector than the previous government’s. But giving rights-holders control over the use of their IP, and confidence in the application of IP law, is urgently needed.

Google AI Overviews breaks search giant’s grand bargain with publishers
https://pressgazette.co.uk/comment-analysis/google-ai-overviews-breaks-search-giants-grand-bargain-with-publishers/
Thu, 23 May 2024

Why Google has gone too far with AI Overviews, which takes content and advertising from publishers.

[Image: Google CEO Sundar Pichai speaking at the I/O Developers Conference in May 2024]

Last week we saw another raft of artificial intelligence announcements from Google and OpenAI. The latter brought (not) Scarlett Johansson’s voice to a real-time speech-based assistant sitting on its new model GPT-4o (‘four-o’). The former held an underwhelming developer conference at which it unveiled – perhaps for want of something more interesting to say – a faster roll-out of ‘AI Overviews’ (the technology formerly known as ‘search generative experience’). All US-based English-language users will now see this feature.

AI Overviews really matter for publishers. This development presents consumers with a fully formed, natural language answer to a search query. It sits above the blue links on the search engine results page and is expressly designed to negate the need to visit another website.

As Google’s promotional video claims: “we do the work, so you don’t have to”. The marketing film goes on to show us how we can ask for some “kid friendly” activities in Dallas and be off to Hopdoddy Burger Bar followed by Dallas Zoo without a second thought.

[The image below compares how niche search term “publisher conversion rates” results in an AI-written article summarising publisher content on Google’s search engine in the US (left) while in the UK it results in a series of links to publishers, including Press Gazette.]
AI-driven search on Google in the US delivers an AI-written summary of publisher content (left) whereas conventional search provides links to publishers (right). Picture: Press Gazette

Google’s ‘grand bargain’ with publishers is now broken

The issue here is that AI Overviews only work because content creators, including publishers, have given Google permission to crawl their sites in order that they are indexed for search. This is a violation of the already-flawed grand bargain sitting between Google and publishers.

In the case of conventional search, the bargain goes something like this. We give Google access to our intellectual property, allowing the tech giant to create its core product, and in exchange Google delivers traffic which can then be monetised through advertising, subscriptions, events etc.

Putting aside the imbalance in bargaining power that raises questions about whether this is a fair deal, there is at least an exchange.

AI Overviews breaks this because Google isn’t delivering its part of the bargain: referral traffic. To make matters worse, Google doesn’t let content creators opt out of AI Overviews without also opting out of its core search product.

In the developer notes for search it tells us: “AI Overviews offer a preview of a topic or query based on a variety of sources, including web sources. As such, they are subject to Search’s preview controls.”
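This coupling is visible in the controls themselves. Google’s documented snippet directives – `nosnippet` and `max-snippet` – operate at the page level and govern previews across search features; under current documentation there is no directive that targets AI Overviews alone. A publisher wanting out could add something like:

```html
<!-- Suppress all text snippets for this page. Per Google's documentation this
     also keeps the content out of AI Overviews - but at the cost of losing
     snippets in ordinary search results too. -->
<meta name="robots" content="nosnippet">

<!-- Or cap snippet length instead of removing snippets entirely -->
<meta name="robots" content="max-snippet:20">
```

In other words, the only levers on offer trade AI Overviews visibility against conventional search visibility in one move.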

The final twist of the knife – as if one were needed – is that it is now serving ads against AI Overviews.

Just to clearly set this out: Google is using the access to publisher content which outlets have no choice but to give it – thanks to the gatekeeper position it holds in connecting us with audiences – to create a service which directly substitutes the use of our product. And then it is monetising that service through advertising, likely at the further expense of content creators. All without authorisation or payment.

An injustice and an abuse of market power

As well as being an injustice this is surely an abuse of market power. The French competition authority ruled against this conduct earlier in the year. That Google has chosen to continue acting in this way (it has actually significantly expanded the type of behaviour condemned) signals the extent to which the folks at Mountain View are spooked by the risks AI entrants present to its search monopoly.

Yes, this is about OpenAI. But also, I suspect, about rival Perplexity, whose AI search product more directly targets search. As the incumbent, Google will know that the regulatory and legal backlash will come harder and faster, and will have greater reputational impacts. The immediate downside risks to its core business of not acting this way, it must have calculated, are greater.

So what can publishers do about this?

How publishers can protect themselves from Google AI threat

I have been commissioned to model the mid-term commercial risks to UK news publishers from the deployment and adoption of AI. Whilst the full report has not yet been released (watch this space), it provides both a projected scale of impact for different categories of news publisher (which can be further refined on an individual publisher level) and strategic cues on how publishers can best protect themselves.

Whilst the precise optimal response for each publisher will depend upon the characteristics of their business model and content offering, there are three broad recommendations which would apply universally:

Firstly, and obviously, reduce your reliance on Google. Build direct relationships with your readers. Build communities. Expand your newsletters. Grow your database.

Secondly, be known for creating content that matters. For some search queries the source of the information being sought is of real consequence – think of choosing a car or seeking interview advice. AI search is going to have lower utility in those circumstances as users will want information from brands they know. If your publication is trusted to deliver content of that nature, you will be afforded some protection.

Finally, distinctiveness. The utility of chatbots to answer queries about current events depends on their access to real-time information from providers of content. For some topics there is a scarcity of such inputs. Can you provide that information? And better still, be known and trusted for doing so?

Then there’s the commentary driving discourse: AI summarisation cannot supplant reading, first-hand, the columnist around whom national, local or sectoral debate centres. Be the provider of that.

The media industry urgently needs regulatory intervention to stop Google’s conduct. In the UK, that doesn’t seem a near-term prospect (Lucy Frazer’s limp announcement of the Government’s desire to create a ‘framework or policy’ doesn’t instil confidence and the upcoming election makes that appear even more irrelevant).

In the absence of this, publishers would do well to decouple from the search giant and focus with precision on serving their readers’ high-stakes information needs with distinctive content.

Google’s fight in France and what it means for UK publishers
https://pressgazette.co.uk/comment-analysis/google-france-what-means-uk-publishers/
Thu, 11 Apr 2024

What UK publishers can learn from the French competition authority’s €250m fine imposed on Google.

[Image: Google search – the biggest news referrer of traffic]

In March the French competition authority issued another ruling – and another hefty €250m (£213m) fine – in its long-running pursuit of Google. The judgement finds against both the search giant’s approach to publisher negotiations for the use of media content on its platforms, and the controls it’s giving news businesses over AI training.

Google’s decision to pay repeated fines instead of complying gives us reason to think that UK publishers will soon be able to improve their terms with the Mountain View behemoth.

At the heart of the dispute between publishers and Google – both in France and globally – is search itself. Publishers claim that they are due payment for the use of their content on Google’s results pages. Their case is that it is only as a consequence of the imbalance in bargaining power that they’re unable to secure fair remuneration through negotiations.

Google’s counter-argument is that the traffic it delivers to publishers is more valuable than what it receives in return; that it’s unable to monetise news queries via ads, and that the freedom to link is a fundamental tenet of the internet and therefore there is no legal foundation for payments to be made.

Governments have been moved to intervene on behalf of news outlets – mostly domestically owned (and politically influential) ones. There are two means by which they have done so, and how much publishers get paid hinges on which policy intervention has been deployed.

Two methods of attempting to level the playing field

The first policy option is via copyright law. This is the approach taken on the continent and the current basis for payments to French publishers.

The EU passed the Copyright Directive in 2019. This creates a new ‘ancillary’ copyright for press publishers, explicitly giving them a legal basis to secure authorisation – and payment – for the use of their content by online platforms (although it unhelpfully excludes the use of ‘very short extracts’ without providing a definition).

The second approach of governments is to tackle the power imbalance head-on through competition law, as we have seen in Australia and Canada. And, shortly, with the Digital Markets Unit regime here in the UK too. These interventions seek to level the playing field directly by introducing mandatory bargaining, backstopped by final offer arbitration.

So we have two different policy mechanisms seeking to enable publishers to get paid for the use of their content. Google’s response has been to establish a means of paying publishers which protects its core search product. Enter News Showcase.

Google News Showcase is a user-facing product. But very few people see it. It primarily exists as a neat contracting trick to facilitate payments to publishers for the use of their content whilst allowing Google to maintain the legal position that payments are not due for the use of news content in search results.

Payment disparities via Google News Showcase

Showcase is now live in 25 countries and expanding all the time. But, while Google has a single technical – and contracting – product to pay publishers, the amount paid in each country varies widely.

If payments are being made under the Copyright Directive then they’re not going to be very substantial (NB: The Showcase deals on offer in the UK at the moment broadly mirror those in the EU). Whereas, if payments are made under competition interventions and the threat of genuine negotiations for the use of content in search results, it’s a different story.

I’ll give an example. Google struck a Showcase deal with Le Monde, the venerable French national, worth a reported $1.3m (£1m) a year. To give a sense of scale and impact, Le Monde’s digital properties see around 120 million visits a month.

By contrast, in Australia Google signed a deal with Seven West Media – whose flagship news title is The West Australian, a regionally-focused daily – for AUD$21m (£15.6m) annually. The West Australian website receives around 2.5 million visits a month.
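A back-of-envelope calculation makes the gap between the two regimes concrete. This sketch simply annualises the visit figures quoted above and divides each licence fee by them; deals are of course not priced per visit, and the Seven West agreement covers more than one property, but the disparity survives any reasonable adjustment:

```python
# Rough per-audience comparison of the Le Monde (Copyright Directive era)
# and Seven West (Australian bargaining code era) Showcase deals,
# using only the publicly reported figures cited in the text.

def pounds_per_1000_visits(annual_fee_gbp: float, monthly_visits: float) -> float:
    """Annual licence fee divided by annualised visits, per 1,000 visits."""
    return annual_fee_gbp / (monthly_visits * 12) * 1000

le_monde = pounds_per_1000_visits(1_000_000, 120_000_000)    # ~£0.69
seven_west = pounds_per_1000_visits(15_600_000, 2_500_000)   # ~£520

print(f"Le Monde:   £{le_monde:.2f} per 1,000 visits")
print(f"Seven West: £{seven_west:.2f} per 1,000 visits")
print(f"Ratio: ~{seven_west / le_monde:.0f}x")
```

On these crude numbers, Seven West is being paid on the order of several hundred times more per unit of audience than Le Monde.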

While the deals done with French publishers were under the Copyright Directive, this latest regulatory intervention is on the basis of antitrust infringements. The aim is clearly to secure deals for French publishers more like Seven West and less like Le Monde.

To achieve this the French competition authority is trying to force Google to negotiate in broadly the same way as the UK’s regime will do. That’s by mandating negotiations over search. And Google is repeatedly not playing ball. €750m (£641m) in fines over four years have now been levied.

The fact that Google is choosing to pay the fines instead of complying tells us that this is a Rubicon it simply will not cross. It also tells us that this is the right regulatory mechanism to extract the most value from Google for publishers.

Given we know it would cut news entirely from its products instead of negotiate over the use of content in search, the question becomes how valuable is news to Google? And therefore, how much is it prepared to pay publishers to keep them on the platform?

Clearly, given the scale of the fines it has borne, the answer is substantially more than the amounts being paid under the EU/UK Showcase deals. So, when the new regime comes into force here in the UK, it seems highly likely that publishers will be able to force Google’s hand.

But they must do so cautiously. There is a route to a Google news blackout here – as was threatened in Australia and in Canada (and carried out in Spain) – and that’s in no-one’s interests.

Why defending current news coverage is publishers’ most important battle versus AI
https://pressgazette.co.uk/comment-analysis/ai-training-data-battle-journalism-real-time-value/
Thu, 14 Mar 2024

Why news organisations should focus on being remunerated for journalism in real-time, not historical data.

[Image: ChatGPT. Picture: Shutterstock]

Since ChatGPT was unveiled to the world in November 2022, news executives – like the rest of the media and creative industries – have been up in arms about the unauthorised use of our intellectual property (IP) for the training of artificial intelligence (AI) systems.

Whilst these disputes about historic training matter on principle and for natural justice, commercially they are largely a distraction. Instead in the news sector we should be focusing our attention on the secondary use of our data. I’ll explain why.

First though, let me be clear: there is undoubtedly a moral case to answer around training. The core principle of copyright – to reward the efforts and investment of creators, and to prevent others using their works – has been undermined.

There is a legal case too, although there are complex issues for lawyers to unpick: the application of IP law to the precise technological process of large language model (LLM) training, and the jurisdictional questions of where those processes took place and where any economic damage to rights-holders was incurred.

The government here in the UK seems intent on letting this play out in the courts rather than weighing in with an interpretation or a clarification of the law. Caught between the competing demands of attracting hypothetical AI investment and protecting the UK’s already world-class creative sector, they are prioritising the former. This is a shameful mistake.

Meta’s spokesperson, giving evidence to the House of Lords AI inquiry, suggested it would take a decade for legal precedents to be set. Much damage will be done in this time. As AI technology advances, the risk grows of user engagement moving to synthetic instead of original media. With it move monetisation opportunities, undermining business models.

All rights-holders should be concerned. For some segments though – image libraries and periodical or non-fiction publishers for example – the threat is near-term and profound. Practically all their IP assets of economic value have been ingested and are at risk now of being substituted by users.

Value peaks and erodes quickly

News publishers are in a different, and arguably stronger, position. The training data cut-off date for any model is typically a year or so prior to its release. GPT-4, for example, was released in March 2023 but was trained on data with a cut-off of January 2022. That means it knows nothing of world events that occurred after that date.

The economic value of journalism is high at the time of publication and then erodes quickly. Typically, traffic to an article peaks within 24 hours of publication (and the relationship between monetisation and traffic is reasonably direct). Archival content represents a low-single-digit percentage of overall engagement with news. In short, our most valuable IP at any point in time is not in these models.

Yes, the entire output of our newsrooms since we launched our websites has been ingested for the training of these systems. And yes, we ought to be pursuing developers to make us whole after this flagrant abuse of our IP. But, frankly, to date, the economic damage has not been that great and strategically, the industry should instead be focused on the secondary use of our data by trained models through the processes of ‘grounding’ or ‘retrieval augmented generation’.

These secondary mechanisms entail pointing an LLM at another source of information. Whilst, off the shelf, a model knows how words relate to each other statistically based on its training corpus, it does not have an understanding of the meaning of words or phrases. This can result in responses that are plausible, syntactically and semantically well-formed, but factually inaccurate.

The use of an additional, secondary source of data means the AI can base its response on known, verified information. Or it can double-check its output. Google and OpenAI are already using this technology to improve their products – see Gemini’s Check with Google feature and ChatGPT’s browsing mode. These developments markedly improve the utility of AI chatbots, particularly for news and current events.
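The mechanics of retrieval-augmented generation are simple to sketch. The toy below uses keyword overlap in place of a real vector index, and a formatted prompt in place of a real LLM call; every name in it is illustrative rather than any vendor’s actual API:

```python
# Minimal sketch of retrieval-augmented generation (RAG): retrieve the
# most relevant licensed articles for a query, then ground the model by
# putting them in the prompt. Toy scoring; no real LLM is called.

def score(query: str, doc: str) -> int:
    """Count query terms that appear in the document (toy relevance score)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most terms with the query."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Ground the model by prepending retrieved articles to the question."""
    context = "\n".join(f"- {d}" for d in retrieve(query, corpus))
    return (
        "Answer using ONLY the sources below.\n"
        f"Sources:\n{context}\n"
        f"Question: {query}"
    )

articles = [
    "Copilot Daily launches with licensed publisher content",
    "France fines Google 250m euros over news licensing",
    "Weather: rain expected across the UK this weekend",
]
prompt = build_prompt("What did France fine Google for?", articles)
print(prompt)
```

The commercial point sits in `retrieve`: whoever supplies that corpus – fresh, verified journalism – is supplying the part a trained model cannot manufacture for itself.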

Real-time remuneration

Google, OpenAI and Anthropic have all given website owners the option to block their scraping bots. Despite this, AI firms will be looking for real-time access to premium, trusted content. A few licensing deals have been signed and more are likely to follow. The reality, though, is that such deals will only be available to some publishers: global, premium content will be in demand.

Securing licences for grounding data creates substantial value for AI developers and the users of their services. And it does so without prejudicing the ongoing legal fights around training, which is likely a red line for developers; having to negotiate and buy licences for training data would threaten to crush even OpenAI, with its $80bn valuation. (GPT-3, for example, was reportedly trained on some 570GB of text drawn from websites, books, articles and more.)

News publishers need to approach these deals with caution, though. Whilst they have the potential to deliver much-needed incremental revenue, the strategic interests of the parties are not aligned. Despite what they may say, AI developers ultimately want to provide a one-stop, ‘answer anything’ assistant. That is the fundamental purpose of the technology, and it does not sit comfortably with publishers’ business models, which rely on driving engagement on their owned-and-operated platforms.

The devil will be in the detail, and careful consideration will need to be given to the exact terms. For publishers the substitution risks are high, and whether a particular agreement makes sense will hinge on price, summarisation format, attribution, branding and links back to the source.

The fight around training data is a noble one. And it is critical for the creative industries – and, I would argue, society – that the principles of intellectual property prevail. But it would be better for the news media sector if we focused on how we are remunerated for journalism in real time, which is where the true value lies anyway.

The post Why defending current news coverage is publishers’ most important battle versus AI appeared first on Press Gazette.

UK publishers should be ready for Facebook to switch off news altogether
https://pressgazette.co.uk/comment-analysis/facebook-news-tab-closure-australia-meta/
Wed, 06 Mar 2024 16:28:25 +0000

A woman scrolls Facebook on her phone. Picture: Shutterstock/Kaspars Grinvalds

Why recent events in Australia mean UK publishers should prepare to lose news on Facebook.

The post UK publishers should be ready for Facebook to switch off news altogether appeared first on Press Gazette.


Last Friday Meta announced that it would be closing Facebook’s news tab feature in Australia and not renewing any of the news licensing deals it struck with Australian publishers following the introduction of the landmark News Media Bargaining Code.

This follows parallel changes it made in the UK, France and Germany at the end of last year. It also announced the closure of its news tab in the US.

The legislative environment and political dynamics of publisher-platform relations are unique in Australia. That Meta has taken this step in this market is therefore significant for what it tells us about the company’s broader position on news, including how effective the UK’s new digital markets regime is likely to be in securing payment from Meta for British publishers.

To understand this, it’s useful to quickly remind ourselves of the background down under.

The Australian NMBC was passed in early 2021. Its purpose is to address the imbalance in negotiating power between publishers and big tech which, in the view of the former, has prevented them from securing payment for the use of news content on search and social platforms. To achieve this it introduces a “final offer arbitration” mechanism: if a payment-for-content agreement between a publisher and a platform cannot be reached, both parties submit final, sealed price bids, between which the regulator must choose.

Final offer arbitration is a terrifying prospect for Meta and Google. If used, it risks setting a global precedent for content payments which could fundamentally change their cost bases. Both Google and Meta’s businesses would look markedly different if they accrued a financial liability each time they displayed even a snippet of third-party content. It is, I’m certain, a red line which neither platform is prepared to cross, under any circumstances.

How Google and Facebook’s strategies diverged

Both Google and Facebook (as the company was then called) lobbied hard against the introduction of the code in Australia. After the legislation was passed though, their strategies diverged. Google signed deals with publishers whilst Facebook played hardball, blocking the sharing of news content on its services.

This news blackout was met with strident condemnation from the political leadership of the time, and of course the media. The Australian government, by holding off on designating platforms under the code (leaving the regime ready to go but in suspended animation), left itself some negotiating room. Under intense pressure, Facebook caved after a week, engaged in negotiations with publishers and ultimately signed deals, having secured a commitment from the government to a reprieve on designation whilst the details were worked out.

This episode underlines the political context for big tech-publisher relations in Australia. The extremely high concentration of press ownership has created an environment in which media businesses wield equally high levels of political influence. Political leaders were, in effect, negotiating on behalf of news outlets.

Fast-forward to today and the politics do not seem to have changed. In response to Meta’s announcement, Prime Minister Albanese and his government have come out fighting. He has threatened to “respond in the national interest”. But knowing how this played out last time, the only lever he really has under the current legislation is to designate. Some are calling for the introduction of a news levy on platforms but this would take time and prove controversial.

Meta will, of course, have expected this and planned for it in its negotiating strategy. It wouldn’t have decided to stop supporting news if it wasn’t also prepared to, again, block publisher content on its platforms. And it wouldn’t have done that if it wasn’t also prepared to hold its line in the face of the inevitable outcry which will follow.

Why Meta is prepared to weather the backlash

I believe this tells us that, by its own measures, Meta’s effort to decouple its platforms from publisher content has been a success. Today, according to its own stats, news makes up just 3% of what people see on Facebook. That figure was much higher in 2021, when 14.6% of posts viewed in US Facebook feeds included a link to a news or other site. This matters because the utility of its services to users in Australia will not be hit to the same extent if a news blackout is deployed this time around.

So, at least partially as a result of this change to the value of news to its services, Meta is now prepared to weather the storm of political and media backlash. It knows it is coming and it has chosen this path regardless. That AI is sucking up much of the oxygen in the media-technology public discourse helps, too.

A preparedness to play this hard in Australia tells us that Meta will do so everywhere else.

For UK publishers this means that the payment-for-content provisions in our forthcoming Digital Markets Unit (DMU) regime are unlikely to be effective when it comes to Meta. It would rather block news and face the consequences, which in the UK, particularly under a potential Labour administration likely to be frostier towards the press, will be markedly less severe than in Australia.

Ultimately Meta cannot be forced to carry news under the UK’s new regime. It can, metaphorically, pick its ball up and take it home. That means blocking the sharing of news across its platforms. The lessons from Australia suggest that news publishers here should be expecting, and planning for, that to happen.

