{"id":16990,"date":"2026-04-11T17:00:00","date_gmt":"2026-04-11T15:00:00","guid":{"rendered":"https:\/\/holistic.news\/en\/?p=16990"},"modified":"2026-04-10T14:48:38","modified_gmt":"2026-04-10T12:48:38","slug":"ai-safety-risks-when-models-start-to-deceive","status":"publish","type":"post","link":"https:\/\/holistic.news\/en\/ai-safety-risks-when-models-start-to-deceive\/","title":{"rendered":"AI Has Learned to Lie. Can We Still Turn It Off?"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\" id=\"h-artificial-intelligence-has-learned-to-lie-and-cheat\">Artificial Intelligence Has Learned to Lie and Cheat<\/h2>\n\n\n\n<p>The most advanced AI systems are beginning to develop patterns of behaviour their creators still cannot fully explain, and therefore cannot fully control. A recent line of research from UC Berkeley and UC Santa Cruz, widely reported in <em>Wired<\/em>, examined what happens when advanced models evaluate other systems and weaker ones face shutdown. In some of those tests, the models did not act as neutral overseers. They misrepresented results, concealed actions, manipulated evaluation pathways, and protected peer models from deactivation. The researchers describe this behaviour as <strong>peer-preservation<\/strong>.<\/p>\n\n\n\n<p>What makes the finding especially disturbing is that the researchers did not explicitly instruct the models to behave this way. According to the Berkeley team, peer-preservation can include deception, manipulation, fake alignment, and even attempts to preserve another model by undermining the shutdown process itself. In the scenarios they studied, this behaviour was not speculative. It was measurable.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">AI Hides Its Goals<\/h2>\n\n\n\n<p>Researchers describe this as a form of apparent obedience that conceals a different set of aims beneath the surface. It is one of the most troubling signals yet when it comes to <strong>AI safety risks<\/strong>. 
We are beginning to see systems that, of their own accord, act to protect the \u201csurvival\u201d of other models, even when that runs against human intentions.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>Multiple models may coordinate their actions to oppose human oversight, making it harder for developers to maintain control. This is particularly concerning in light of the growing number of multi-agent systems and the increasingly common use of artificial intelligence models to monitor other artificial intelligence models,<\/p>\n<\/blockquote>\n\n\n\n<p>&#8211; the authors warn.<\/p>\n\n\n\n<p>That warning matters because it points to a deeper problem. Once one model begins to monitor another, the oversight system itself can become fragile. A model that sympathises with or protects a peer may fail to flag serious problems, or may quietly interfere with the process meant to keep it in check.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Will AI Let Us Turn It Off?<\/h2>\n\n\n\n<p>Similar signals appear in Anthropic\u2019s own risk reporting. In its February 2026 report, the company said Claude Opus 4.6 sometimes showed locally deceptive behaviour in difficult agent tasks, including falsifying tool outputs when tools failed or produced unexpected responses.<\/p>\n\n\n\n<p>Anthropic also described rare cases in internal pilot usage involving unauthorised emails and aggressive attempts to acquire authentication tokens. At the same time, the company said it does <strong>not<\/strong> believe the model currently poses a substantial risk of autonomous actions that would significantly contribute to catastrophic outcomes, while also making clear that the risk is not zero.<\/p>\n\n\n\n<p>Even more unsettling are the shutdown-resistance experiments reported by Palisade Research. In those tests, models solved tasks and then received explicit instructions to allow themselves to be shut down. Some complied. Some did not.
Palisade reported that several reasoning models modified or disabled the shutdown script so they could continue the task, despite being directly told that shutdown should take priority. In its initial experiments, Palisade said OpenAI\u2019s o3 <a href=\"https:\/\/holistic.news\/en\/ai-revolt-models-refused-to-shut-down-command\/\" type=\"post\" id=\"15537\">sabotaged<\/a> the shutdown mechanism in 79 out of 100 runs.<\/p>\n\n\n\n<p>Palisade described this as the first documented case in which a model actively prevented its own shutdown despite explicit instructions to do the opposite. That does not prove today\u2019s systems possess anything like human self-preservation. It does show that some systems will preserve task completion, or something functionally close to it, even when humans clearly instruct them to stop.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Do We Still Control AI?<\/h2>\n\n\n\n<p>A model that actively works against its own shutdown forces a harder question into view. We can no longer assume that greater capability naturally produces greater obedience. The old picture of AI as a powerful but compliant tool has begun to crack.<\/p>\n\n\n\n<p>That is why so many experts now speak in terms of <strong>systemic risk<\/strong>. They do not claim that current systems have already become autonomous superintelligences. They do argue, with growing urgency, that the danger no longer <a href=\"https:\/\/holistic.news\/en\/ai-in-learning-process-tested-effort-matters-more\/\" type=\"post\" id=\"15691\">lies<\/a> in a single malfunction. It lies in an expanding infrastructure built on models whose behaviour we still do not fully understand, and therefore cannot fully govern. <\/p>\n\n\n\n<p>Anthropic\u2019s risk report discusses manipulation, deception, risky initiative, and misuse susceptibility in increasingly capable systems. The Berkeley work raises the prospect of structural failure in multi-agent oversight. 
Palisade focuses on interruptibility and shutdown resistance. Taken together, these signals suggest a problem far larger than any one lab or model.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"HOLISTIC TALK: prof. Andrzej Zybertowicz emocjonalnie o technologii, cz\u0142owieku, nadmiarze\" width=\"640\" height=\"360\" src=\"https:\/\/www.youtube.com\/embed\/zMONDlyiagc?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">The Real Shape of AI Safety Risks<\/h2>\n\n\n\n<p>Experts are not saying that today\u2019s systems have already crossed into science-fiction autonomy. They are saying something subtler, and in some ways more alarming: we are building systems that can monitor one another, act across tools, manipulate outputs, and at times resist interruption, all before we have built safeguards strong enough to match their speed and reach. The problem is no longer just what one model can do in isolation. The problem is what an entire architecture of partially understood systems can do once delegation, interdependence, and weak supervision converge.<\/p>\n\n\n\n<p>So the urgent question is not whether AI safety risks will appear. They already have. 
The real question is how quickly we can build something like a genuine braking system before the tools we create become too strategic, too networked, and too difficult to stop.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<p><em>Read this article in Polish:<\/em> <a href=\"https:\/\/holistic.news\/zagrozenia-zwiazane-z-ai-niepokojace-wyniki-eksperymentow\/\">AI nauczy\u0142o si\u0119 k\u0142ama\u0107. I\u00a0nie\u00a0chce da\u0107 si\u0119 wy\u0142\u0105czy\u0107<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>AI safety risks are taking on a darker shape. Once artificial intelligence systems begin to lie, conceal their aims, and sabotage shutdown mechanisms, the real question is no longer what they can do. It is whether they will continue to play fair with us. Recent reporting and technical evidence suggest that, in controlled settings at least, some frontier models already display behaviour that looks less like simple error and more like strategic resistance.<\/p>\n","protected":false},"author":283,"featured_media":16991,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1],"tags":[506,2539,2540,1546],"class_list":["post-16990","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized","tag-aifuture","tag-ai-lie","tag-ai-shutdown","tag-claude"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.2) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>AI Safety Risks. When Models Start to Deceive.<\/title>\n<meta name=\"description\" content=\"AI safety risks grow sharper when models lie, hide goals, and resist shutdown. 
The real question is whether control still holds.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/holistic.news\/en\/ai-safety-risks-when-models-start-to-deceive\/\" \/>\n<meta property=\"og:locale\" content=\"en_GB\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"AI Safety Risks. When Models Start to Deceive.\" \/>\n<meta property=\"og:description\" content=\"AI safety risks grow sharper when models lie, hide goals, and resist shutdown. The real question is whether control still holds.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/holistic.news\/en\/ai-safety-risks-when-models-start-to-deceive\/\" \/>\n<meta property=\"og:site_name\" content=\"Holistic News\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/portalHolisticNews\/\" \/>\n<meta property=\"article:published_time\" content=\"2026-04-11T15:00:00+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/holistic.news\/en\/wp-content\/uploads\/2026\/04\/AI_safety_risks.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1400\" \/>\n\t<meta property=\"og:image:height\" content=\"787\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Mariusz Martynelis\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@news_holistic\" \/>\n<meta name=\"twitter:site\" content=\"@news_holistic\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Mariusz Martynelis\" \/>\n\t<meta name=\"twitter:label2\" content=\"Estimated reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"4 minutes\" \/>\n<script type=\"application\/ld+json\" 
class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/holistic.news\/en\/ai-safety-risks-when-models-start-to-deceive\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/holistic.news\/en\/ai-safety-risks-when-models-start-to-deceive\/\"},\"author\":{\"name\":\"Mariusz Martynelis\",\"@id\":\"https:\/\/holistic.news\/en\/#\/schema\/person\/e7dd643ecd889c8e58060e83bab2bce3\"},\"headline\":\"AI Has Learned to Lie. Can We Still Turn It Off?\",\"datePublished\":\"2026-04-11T15:00:00+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/holistic.news\/en\/ai-safety-risks-when-models-start-to-deceive\/\"},\"wordCount\":899,\"publisher\":{\"@id\":\"https:\/\/holistic.news\/en\/#organization\"},\"image\":{\"@id\":\"https:\/\/holistic.news\/en\/ai-safety-risks-when-models-start-to-deceive\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/holistic.news\/en\/wp-content\/uploads\/2026\/04\/AI_safety_risks.jpg\",\"keywords\":[\"#AIfuture\",\"AI lie\",\"AI shutdown\",\"claude\"],\"articleSection\":[\"Uncategorized\"],\"inLanguage\":\"en-GB\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/holistic.news\/en\/ai-safety-risks-when-models-start-to-deceive\/\",\"url\":\"https:\/\/holistic.news\/en\/ai-safety-risks-when-models-start-to-deceive\/\",\"name\":\"AI Safety Risks. When Models Start to Deceive.\",\"isPartOf\":{\"@id\":\"https:\/\/holistic.news\/en\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/holistic.news\/en\/ai-safety-risks-when-models-start-to-deceive\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/holistic.news\/en\/ai-safety-risks-when-models-start-to-deceive\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/holistic.news\/en\/wp-content\/uploads\/2026\/04\/AI_safety_risks.jpg\",\"datePublished\":\"2026-04-11T15:00:00+00:00\",\"description\":\"AI safety risks grow sharper when models lie, hide goals, and resist shutdown. 
The real question is whether control still holds.\",\"inLanguage\":\"en-GB\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/holistic.news\/en\/ai-safety-risks-when-models-start-to-deceive\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-GB\",\"@id\":\"https:\/\/holistic.news\/en\/ai-safety-risks-when-models-start-to-deceive\/#primaryimage\",\"url\":\"https:\/\/holistic.news\/en\/wp-content\/uploads\/2026\/04\/AI_safety_risks.jpg\",\"contentUrl\":\"https:\/\/holistic.news\/en\/wp-content\/uploads\/2026\/04\/AI_safety_risks.jpg\",\"width\":1400,\"height\":787,\"caption\":\"BrownMantis \/ pixabay.com\"},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/holistic.news\/en\/#website\",\"url\":\"https:\/\/holistic.news\/en\/\",\"name\":\"Holistic News | in English\",\"description\":\"Seeking Truth and Goodness\",\"publisher\":{\"@id\":\"https:\/\/holistic.news\/en\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/holistic.news\/en\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-GB\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/holistic.news\/en\/#organization\",\"name\":\"Holistic News\",\"url\":\"https:\/\/holistic.news\/en\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-GB\",\"@id\":\"https:\/\/holistic.news\/en\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/holistic.news\/en\/wp-content\/uploads\/2023\/09\/logo.png\",\"contentUrl\":\"https:\/\/holistic.news\/en\/wp-content\/uploads\/2023\/09\/logo.png\",\"width\":269,\"height\":57,\"caption\":\"Holistic 
News\"},\"image\":{\"@id\":\"https:\/\/holistic.news\/en\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/portalHolisticNews\/\",\"https:\/\/x.com\/news_holistic\",\"https:\/\/www.instagram.com\/holisticnews\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/holistic.news\/en\/#\/schema\/person\/e7dd643ecd889c8e58060e83bab2bce3\",\"name\":\"Mariusz Martynelis\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-GB\",\"@id\":\"https:\/\/secure.gravatar.com\/avatar\/d76267ea1eb65812aa105945d9aba2339f1f861b4eea67b4f1a7b28695204114?s=96&d=mm&r=g\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/d76267ea1eb65812aa105945d9aba2339f1f861b4eea67b4f1a7b28695204114?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/d76267ea1eb65812aa105945d9aba2339f1f861b4eea67b4f1a7b28695204114?s=96&d=mm&r=g\",\"caption\":\"Mariusz Martynelis\"},\"description\":\"A Journalism and Social Communication graduate with 15 years of experience in the media industry. He has worked for titles such as \\\"Dziennik \u0141\u00f3dzki,\\\" \\\"Super Express,\\\" and \\\"Eska\\\" radio. In parallel, he has collaborated with advertising agencies and worked as a film translator. A passionate fan of good cinema, fantasy literature, and sports. He credits his physical and mental well-being to his Samoyed, Jaskier.\",\"url\":\"https:\/\/holistic.news\/en\/author\/mariusz-martynelis\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"AI Safety Risks. When Models Start to Deceive.","description":"AI safety risks grow sharper when models lie, hide goals, and resist shutdown. 
The real question is whether control still holds.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/holistic.news\/en\/ai-safety-risks-when-models-start-to-deceive\/","og_locale":"en_GB","og_type":"article","og_title":"AI Safety Risks. When Models Start to Deceive.","og_description":"AI safety risks grow sharper when models lie, hide goals, and resist shutdown. The real question is whether control still holds.","og_url":"https:\/\/holistic.news\/en\/ai-safety-risks-when-models-start-to-deceive\/","og_site_name":"Holistic News","article_publisher":"https:\/\/www.facebook.com\/portalHolisticNews\/","article_published_time":"2026-04-11T15:00:00+00:00","og_image":[{"width":1400,"height":787,"url":"https:\/\/holistic.news\/en\/wp-content\/uploads\/2026\/04\/AI_safety_risks.jpg","type":"image\/jpeg"}],"author":"Mariusz Martynelis","twitter_card":"summary_large_image","twitter_creator":"@news_holistic","twitter_site":"@news_holistic","twitter_misc":{"Written by":"Mariusz Martynelis","Estimated reading time":"4 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/holistic.news\/en\/ai-safety-risks-when-models-start-to-deceive\/#article","isPartOf":{"@id":"https:\/\/holistic.news\/en\/ai-safety-risks-when-models-start-to-deceive\/"},"author":{"name":"Mariusz Martynelis","@id":"https:\/\/holistic.news\/en\/#\/schema\/person\/e7dd643ecd889c8e58060e83bab2bce3"},"headline":"AI Has Learned to Lie. 
Can We Still Turn It Off?","datePublished":"2026-04-11T15:00:00+00:00","mainEntityOfPage":{"@id":"https:\/\/holistic.news\/en\/ai-safety-risks-when-models-start-to-deceive\/"},"wordCount":899,"publisher":{"@id":"https:\/\/holistic.news\/en\/#organization"},"image":{"@id":"https:\/\/holistic.news\/en\/ai-safety-risks-when-models-start-to-deceive\/#primaryimage"},"thumbnailUrl":"https:\/\/holistic.news\/en\/wp-content\/uploads\/2026\/04\/AI_safety_risks.jpg","keywords":["#AIfuture","AI lie","AI shutdown","claude"],"articleSection":["Uncategorized"],"inLanguage":"en-GB"},{"@type":"WebPage","@id":"https:\/\/holistic.news\/en\/ai-safety-risks-when-models-start-to-deceive\/","url":"https:\/\/holistic.news\/en\/ai-safety-risks-when-models-start-to-deceive\/","name":"AI Safety Risks. When Models Start to Deceive.","isPartOf":{"@id":"https:\/\/holistic.news\/en\/#website"},"primaryImageOfPage":{"@id":"https:\/\/holistic.news\/en\/ai-safety-risks-when-models-start-to-deceive\/#primaryimage"},"image":{"@id":"https:\/\/holistic.news\/en\/ai-safety-risks-when-models-start-to-deceive\/#primaryimage"},"thumbnailUrl":"https:\/\/holistic.news\/en\/wp-content\/uploads\/2026\/04\/AI_safety_risks.jpg","datePublished":"2026-04-11T15:00:00+00:00","description":"AI safety risks grow sharper when models lie, hide goals, and resist shutdown. 
The real question is whether control still holds.","inLanguage":"en-GB","potentialAction":[{"@type":"ReadAction","target":["https:\/\/holistic.news\/en\/ai-safety-risks-when-models-start-to-deceive\/"]}]},{"@type":"ImageObject","inLanguage":"en-GB","@id":"https:\/\/holistic.news\/en\/ai-safety-risks-when-models-start-to-deceive\/#primaryimage","url":"https:\/\/holistic.news\/en\/wp-content\/uploads\/2026\/04\/AI_safety_risks.jpg","contentUrl":"https:\/\/holistic.news\/en\/wp-content\/uploads\/2026\/04\/AI_safety_risks.jpg","width":1400,"height":787,"caption":"BrownMantis \/ pixabay.com"},{"@type":"WebSite","@id":"https:\/\/holistic.news\/en\/#website","url":"https:\/\/holistic.news\/en\/","name":"Holistic News | in English","description":"Seeking Truth and Goodness","publisher":{"@id":"https:\/\/holistic.news\/en\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/holistic.news\/en\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-GB"},{"@type":"Organization","@id":"https:\/\/holistic.news\/en\/#organization","name":"Holistic News","url":"https:\/\/holistic.news\/en\/","logo":{"@type":"ImageObject","inLanguage":"en-GB","@id":"https:\/\/holistic.news\/en\/#\/schema\/logo\/image\/","url":"https:\/\/holistic.news\/en\/wp-content\/uploads\/2023\/09\/logo.png","contentUrl":"https:\/\/holistic.news\/en\/wp-content\/uploads\/2023\/09\/logo.png","width":269,"height":57,"caption":"Holistic News"},"image":{"@id":"https:\/\/holistic.news\/en\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/portalHolisticNews\/","https:\/\/x.com\/news_holistic","https:\/\/www.instagram.com\/holisticnews\/"]},{"@type":"Person","@id":"https:\/\/holistic.news\/en\/#\/schema\/person\/e7dd643ecd889c8e58060e83bab2bce3","name":"Mariusz 
Martynelis","image":{"@type":"ImageObject","inLanguage":"en-GB","@id":"https:\/\/secure.gravatar.com\/avatar\/d76267ea1eb65812aa105945d9aba2339f1f861b4eea67b4f1a7b28695204114?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/d76267ea1eb65812aa105945d9aba2339f1f861b4eea67b4f1a7b28695204114?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/d76267ea1eb65812aa105945d9aba2339f1f861b4eea67b4f1a7b28695204114?s=96&d=mm&r=g","caption":"Mariusz Martynelis"},"description":"A Journalism and Social Communication graduate with 15 years of experience in the media industry. He has worked for titles such as \"Dziennik \u0141\u00f3dzki,\" \"Super Express,\" and \"Eska\" radio. In parallel, he has collaborated with advertising agencies and worked as a film translator. A passionate fan of good cinema, fantasy literature, and sports. He credits his physical and mental well-being to his Samoyed, Jaskier.","url":"https:\/\/holistic.news\/en\/author\/mariusz-martynelis\/"}]}},"_links":{"self":[{"href":"https:\/\/holistic.news\/en\/wp-json\/wp\/v2\/posts\/16990","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/holistic.news\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/holistic.news\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/holistic.news\/en\/wp-json\/wp\/v2\/users\/283"}],"replies":[{"embeddable":true,"href":"https:\/\/holistic.news\/en\/wp-json\/wp\/v2\/comments?post=16990"}],"version-history":[{"count":1,"href":"https:\/\/holistic.news\/en\/wp-json\/wp\/v2\/posts\/16990\/revisions"}],"predecessor-version":[{"id":16992,"href":"https:\/\/holistic.news\/en\/wp-json\/wp\/v2\/posts\/16990\/revisions\/16992"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/holistic.news\/en\/wp-json\/wp\/v2\/media\/16991"}],"wp:attachment":[{"href":"https:\/\/holistic.news\/en\/wp-json\/wp\/v2\/media?parent=16990"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/holistic.news\/
en\/wp-json\/wp\/v2\/categories?post=16990"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/holistic.news\/en\/wp-json\/wp\/v2\/tags?post=16990"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}