
Is That Real? A Guide to Identifying Fake Wildlife Videos Created with Generative AI

Example of what an AI-generated image could look like.

Author’s Note: Despite its resemblance to current GenAI visuals, this image by user sunny305 was published in 2021, prior to the AI boom, and is likely representative of the skill of digital Photoshop artists.

In the head-spinning, ever-expanding world of generative AI, a particularly popular niche is being cultivated that could cause long-term damage to wildlife and the way humans interact with them: fake animal videos.

Picture seeing this on your feed: It’s dark, in a fenced backyard, and nocturnal animals are out to explore. In the video, a group of bunnies on the edge of a trampoline investigate the surface, venturing forward, and then, realizing the springiness of the black mesh, begin to bounce. Soon the whole group is bouncing with enthusiasm.

But there are some issues. What seemed to be seven bunnies at the beginning of the video turns into six by the time they’re bouncing. They’re unusually blond for wild rabbits, and their color patterns seem to disappear. In fact, one of the bouncing bunnies does disappear! What eerie, sinister rabbits are these, who morph their shapes and flicker out of existence?

This video isn’t real; it was generated by a computer using what is now known as generative artificial intelligence. Generative AI is rooted in large language models (LLMs), which take in massive amounts of data and make predictive guesses about what to generate based on the examples in their training data. A large language model doesn’t “think,” any more than any other software does.

The results of an LLM’s guesses can be unexpected. Hence, rabbit ears that are reabsorbed into another rabbit’s fluffy butt—the model doesn’t know that this shouldn’t occur, only that when many rabbits are crowded together, ears unattached to a rabbit’s head may appear over the other’s back. 

Rather than thinking, a large language model reproduces what it has seen in the data it’s been given, and so has less object permanence than a baby.

These types of videos are racking up millions of impressions and being posted on social media sites by thousands of social media “creators” every month. Some social media apps, such as Facebook and Instagram, have posted policies requiring users to label AI-created images and videos as made with AI; others, such as X, do not currently require all users to do so.

However, enforcement by the social media companies is often scattershot, and many users deceive others by burying the required “made with AI” label at the very end of a caption, or by not following the rules at all. For example, one Instagram video of eagles stepping in wet concrete as construction workers look on has no clear “made with AI” identifier, and the caption makes it seem as if it really happened.

When a large language model makes mistakes that it presents as factual, the technical term for these mistakes is “hallucinations.” According to a 2025 study by researchers at the University of Singapore, hallucinations are inevitable given the nature of the technology. Because a large language model must act on its own predictions to proceed, errors build on each other, and newer generative AI models are showing more errors rather than fewer.

System Overload

While it may seem innocuous, the misinformation spread by these kinds of posts causes more harm than is apparent on the surface.

Cultural depictions of animals can sway the public’s feelings about wildlife. The movie “Jaws,” released in 1975, contributed to longstanding negative perceptions of sharks and may have influenced an uptick in the killing of sharks. Today, the fake content about wild animals proliferating on social media could give people the wrong ideas about how animals behave or how to interact with nature safely.

Most of us going for a hike don’t expect to encounter a potentially dangerous animal, but if we did, would we know how to respond? Even if you would, what about your neighbor or your older uncle? If they saw a video of a grizzly bear licking a kitten, would they know this is not a likely occurrence? Would a child who has grown up with feel-good fake animal videos their whole life be able to guess this is not real?

The price of a generative AI video like this could be a life.

Dangers like these are already apparent in the spawning of hordes of AI-written mushroom guides. As any forager knows, identifying edible foods in the wild is already a serious process, and no food demands more care than mushrooms. North America has several deadly mushroom species that look nearly identical to edible ones and can be told apart only by the most experienced mycologists. A recent alarming uptick in mushroom poisonings in California underlines the danger; such poisonous fungi can kill within hours and may cause irreparable damage to your body if you survive.

It’s not just safety that this kind of information can affect; our sense of awe and wonder in the world around us is an unintended casualty of the proliferation of convincing fakes. Creativity has been shown to suffer too as a side effect of using AI tools; why think when a machine is doing it for you?

Reading Between the Code

There are some common ways to tell whether the video you’re watching is generative AI:

Video length: Most generative AI video tools can only generate about 30 seconds of video at a time, and consistency between clips is poor. LLMs often don’t retain much memory, and each prompt given to the software will result in a new video. Thus the model will slightly, or even dramatically, change the appearance of characters in the video from prompt to prompt or even from scene to scene. Some video prompters have come up with complicated workarounds for consistency, but for the time being, most accounts intentionally posting generative AI videos won’t go through that amount of work.

Consider the source: Often AI accounts will post multiple versions of similar videos, with the hope that one will get views and likes, leading to monetization. Does the source have many similar videos, or does the video seem to have an agenda? Is the source a new account or one that doesn’t seem able to respond? Many of these types of accounts across platforms profit from clickbait and false information.

Visual clues: A few years ago, counting the fingers on the hand of a suspected AI photo was an easy way to spot a fake image. While large language models have gotten more complex, glitches still happen, as in the bunny-trampoline video. Mistakes may also be more subtle, so here are some questions to ask yourself:

  • Do the colors, size, or movement of the animal appear natural? 
  • What about the setting? 
  • Do straight lines like teeth, bricks, tiles, or walls blur or disappear?
  • Can you tell where the light in the video is coming from (e.g., the sun or a lamp)?
  • Does the direction of the light change? Do shadows move on their own?
  • Does the video quality make sense? One of the reasons the bunny video fools us is that it looks like security footage, and we expect low resolution.

Date of media: If a video or image was posted before 2022, there is a much higher chance of it being authentic. Before easy public access to generative AI tools, making a convincing fake image required significantly more work on the part of an individual.

Reverse Image Search: There are several versions of this on different search engines, but the idea is that if you put in an image, it will pull up all the sites where the image can be found. This is useful for finding the first time it may have been posted, which can help you determine if the image is real, or possibly a fake that resurfaces every few years.

Content of video: Ask yourself if the behavior makes sense. If it really happens in nature, there will likely be other videos or writing about it. Consult expert sites and reliable forum posts to determine what may really happen; search engines may bring up sites and pages that exist only for clicks, so use verified sources as much as possible. AI detectors, unfortunately, aren’t always accurate and appear to become less accurate over time.

Trustworthy Sources

The best way to determine the reality of what you see is to have places to turn for expert opinions.

Here are some commonly used (even by wildlife biologists) resources for identifying wildlife and learning about behavior:

  • iNaturalist: iNaturalist is a community science app and website where anyone can upload photos of anything from animals to plants to fungi and rely on real people to assist with an identification, often very quickly. iNaturalist also offers the ability to look through hundreds of photos of common wildlife, allowing someone seeking an ID to see unusual traits that might occur in a species. Did you see a molting screech owl? There will be pictures of one for you to compare it to!

  • Merlin: Merlin is a bird-specific app and website in a similar vein to iNaturalist, but with birdsong samples available to verify your identification. It’s a favorite of hardcore birders and has earned that reputation.

  • Maryland Biodiversity Project: Since 2012, MBP has had the mission of cataloguing the life found within the borders of our state, and they are thorough!

  • HerpMapper: Similar to iNaturalist, but for amphibians and reptiles.

  • Local Wildlife Groups: Your biggest asset in determining whether you’re getting correct information is the experience of people well-versed in their local wildlife. Drawing on your group’s collective knowledge will often beat researching on your own; for example, a birding group with members of varying skill may together have over a century of experience among them. Humans have always shared collective knowledge, and times like these show how important that habit is.

Referencing these sources for an ID takes longer than asking an AI assistant, but it is well worth the effort and lets you learn new things in the process.

Be vigilant when you see something you aren’t sure of, and if you think it might be AI, don’t share it. Sharing reinforces and spreads misinformation and encourages the creation of new posts. Social media and AI companies make significant money off our usage, even when we don’t ask for it (and many people don’t). It’s easy to blame others for sharing, but remember that they are facing the same uncertainty we are.

The real world, and the real wonders found in it, are worth fighting for. Surrounded by so much that is artificial, make sure that you’re appreciating nature that is real. There’s great wildlife material out there, without resorting to crude imitations and impossible bunnies.


