Instagram is making a more sheltered online experience the new normal for its teenage users. The platform is introducing a PG-13-inspired content moderation system that will be enabled automatically for all users under 18.
This new “13+” setting is designed as a protective default: without any action from parents, teens start out in a more filtered environment. If a teen wants to see a broader range of content, they must get a parent’s explicit approval.
The system will target a wider variety of “borderline” content. This includes posts with profanity, risky stunts, and themes that could be seen as encouraging harmful behavior. Instagram will also proactively block searches for sensitive terms, making it harder for teens to stumble upon or seek out inappropriate material.
This move is a direct response to a climate of intense scrutiny. An independent report, co-authored by a former Meta insider, recently concluded that Instagram’s safety tools were failing. This new, more restrictive default is Meta’s most visible effort to change that narrative.
The feature will roll out first in the US, UK, Australia, and Canada. While Meta presents it as a major step forward for teen safety, campaigners remain cautious, stressing that past promises have fallen short and that the effectiveness of this new system must be independently verified.