<!doctype html><html lang=en-us><head><meta name=generator content="Hugo 0.158.0"><meta charset=utf-8><meta name=viewport content="width=device-width,initial-scale=1"><title>Manuel Sam Ribeiro</title><meta name=description content="Personal website of Manuel Sam Ribeiro, a Senior Applied Scientist at Amazon AGI working on multimodal LLMs, speech-to-speech systems, and spoken conversational AI. PhD in Speech and Language Processing, University of Edinburgh."><meta name=theme-color content="#F4F2EF" media="(prefers-color-scheme: light)"><meta name=theme-color content="#1A1816" media="(prefers-color-scheme: dark)"><link rel=preconnect href=https://fonts.googleapis.com><link rel=preconnect href=https://fonts.gstatic.com crossorigin><link href="https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600&family=Merriweather:ital,wght@0,400;0,700;1,400&display=swap" rel=stylesheet><link rel=icon type=image/png href=/favicon-96x96.png sizes=96x96><link rel=icon type=image/svg+xml href=/favicon.svg><link rel="shortcut icon" href=/favicon.ico><link rel=apple-touch-icon sizes=180x180 href=/apple-touch-icon.png><link rel=manifest href=/site.webmanifest><link rel=stylesheet href=https://msamribeiro.com/css/main.min.7a629342f6288c14bf6d6725c213a3ea116555982a7584b5704343333f8f6a4f.css integrity="sha256-emKTQvYojBS/bWclwhOj6hFlVZgqdYS1cENDMz+Pak8="><link rel=stylesheet href=https://msamribeiro.com/vendor/katex/katex.min.css><script defer src=https://msamribeiro.com/vendor/katex/katex.min.js></script><script defer src=https://msamribeiro.com/vendor/katex/auto-render.min.js></script></head><body data-theme=light><div class=page-shell><nav class=morandi-nav><div class=brand><span class=dot></span>
<a href=https://msamribeiro.com/>Manuel Sam Ribeiro</a></div><div class=nav-links><a href=https://msamribeiro.com/#experience>Experience</a>
<a href=https://msamribeiro.com/#education>Education</a>
<a href=https://msamribeiro.com/#research>Research</a>
<a href=https://msamribeiro.com/#writing>Blog</a></div><button class=theme-toggle data-theme-toggle aria-label="Toggle theme">
<span class=icon data-icon-sun>☼</span>
<span class=icon data-icon-moon>☾</span></button></nav><main class=main-container><header class="hero section-block"><div id=hero class=hero-copy><div class=hero-header><div class=hero-header-text><h1>Manuel Sam Ribeiro</h1><p class=meta>Senior Applied Scientist<br><span class=meta-affiliation>Amazon AGI · Gdańsk, Poland</span></p></div><div class=hero-media><div class=hero-media-img--light style=background-image:url(/images/profile-light.png) role=img aria-label="Manuel Sam Ribeiro portrait"></div><div class=hero-media-img--dark style=background-image:url(/images/profile-dark.png) role=img aria-label="Manuel Sam Ribeiro portrait"></div></div></div><div class=summary>I am a <span class=text-accent>speech and language researcher</span> working at the boundary between research ambition and production reality. I am currently a Senior Applied Scientist at Amazon AGI, where I build <span class=text-accent>multimodal LLMs</span> and <span class=text-accent>spoken conversational systems</span>. Previously, I was at Apple and Microsoft developing <span class=text-accent>speech synthesis</span> and <span class=text-accent>speech recognition</span> products; and at the <span class=text-accent>University of Edinburgh</span>, where I completed my PhD and held a senior postdoctoral research position.</div><div class=hero-chips><div class=hero-chips-label>Research Interests</div><div class=hero-chips-list><span class=hero-chip>Multimodal LLMs</span>
<span class=hero-chip>Spoken conversational AI</span>
<span class=hero-chip>Speech synthesis & voice conversion</span>
<span class=hero-chip>Speech-to-speech systems</span></div></div><div class=hero-links><a href=https://www.linkedin.com/in/msambentoribeiro class=text-link target=_blank rel=noopener>LinkedIn</a>
<a href=https://github.com/msamribeiro class=text-link target=_blank rel=noopener>Github</a>
<a href="https://scholar.google.com/citations?user=VdV_-40AAAAJ" class=text-link target=_blank rel=noopener>Google Scholar</a>
<a href=https://www.amazon.science/author/sam-ribeiro class=text-link target=_blank rel=noopener>Amazon Science</a></div></div></header><section id=experience class=section-block><div class=section-heading><span class=bar></span><div><h2>Experience</h2></div></div><div class=collapsible data-expandable data-limit=3><div class=exp-list data-expandable-content><article class="exp-item exp-item--current"><div class=exp-header><span class=exp-org>Amazon AGI · Gdańsk, Poland</span>
<span class=exp-period>2021 — present</span></div><div class=exp-roles><div class=exp-role-row><span class=exp-role-title>Senior Applied Scientist</span>
<span class=exp-role-period>Apr 2025 — present</span>
<span class=exp-now>now</span></div><div class=exp-role-row><span class=exp-role-title>Applied Scientist</span>
<span class=exp-role-period>2021 — Apr 2025</span></div></div><p class=exp-desc>I currently lead applied research on large foundation models for spoken conversational agents and speech-to-speech systems, with a focus on language expansion and multilingual voices. I owned the speech generation technical roadmap for speech-to-speech, from early exploration to product launch, delivering SOTA voices in 5 languages. Earlier, as technical lead, I brought TTS voices, voice conversion systems, and ultra-lightweight on-device voices from research to production, built from limited training data.</p></article><article class=exp-item><div class=exp-header><span class=exp-org>University of Edinburgh · School of Informatics</span>
<span class=exp-period>2017 — 2020</span></div><div class=exp-roles><div class=exp-role-row><span class=exp-role-title>Senior Postdoctoral Researcher</span>
<span class=exp-role-period>Aug 2020 — Dec 2020</span></div><div class=exp-role-row><span class=exp-role-title>Postdoctoral Researcher</span>
<span class=exp-role-period>2017 — Aug 2020</span></div></div><p class=exp-desc>As a postdoctoral researcher, I conducted independent and collaborative research in speech recognition, speaker diarization, and ultrasound tongue imaging. My work focused on developing machine learning solutions to help speech therapists diagnose and treat speech sound disorders in children. As Principal Investigator, I was awarded a Carnegie Trust research grant to study automatic speech recognition from ultrasound images of the tongue.</p></article><article class=exp-item><div class=exp-header><span class=exp-org>Apple · Siri Speech · Cupertino, CA</span>
<span class=exp-period>2016</span></div><div class=exp-roles><div class=exp-role-row><span class=exp-role-title>Research Engineer Intern</span></div></div><p class=exp-desc>I improved the prosody of text-to-speech voices by modeling long-term intonation patterns.</p></article><article class=exp-item><div class=exp-header><span class=exp-org>Microsoft Language Development Center · Lisbon, Portugal</span>
<span class=exp-period>2007 — 2012</span></div><div class=exp-roles><div class=exp-role-row><span class=exp-role-title>Speech Scientist / Language Expert</span></div></div><p class=exp-desc>I developed text-to-speech voices for European and Brazilian Portuguese and language models for ASR, and I led a project that delivered text normalization and inverse text normalization rules for 10 European languages.</p></article></div><button class=ghost-toggle type=button data-expand-toggle>Show More ↓</button></div></section><section id=education class=section-block><div class=section-heading><span class=bar></span><div><h2>Education</h2></div></div><div class=collapsible data-expandable data-limit=2><div class=edu-list data-expandable-content><article class=edu-item><div class=edu-header><span class=edu-degree>PhD, Speech and Language Processing</span>
<span class=edu-period>2013 — 2017</span></div><div class=edu-inst>University of Edinburgh · School of Informatics</div><div class=edu-thesis>Thesis:
<a href=https://era.ed.ac.uk/items/fe5bc86b-4802-4f8e-a11d-daf783b05717 target=_blank rel=noopener>Suprasegmental representations for the modeling of fundamental frequency in statistical parametric speech synthesis</a></div></article><article class=edu-item><div class=edu-header><span class=edu-degree>MSc, Speech and Language Processing</span>
<span class=edu-period>2012 — 2013</span></div><div class=edu-inst>University of Edinburgh · School of Psychology, Philosophy & Language Sciences</div><div class=edu-thesis>Dissertation:
<a href=https://era.ed.ac.uk/items/94455daa-a4cd-4ecd-a9d1-f1fa7a8b2b00 target=_blank rel=noopener>Exploring Discourse-Level Features for Audiobook-based Speech Synthesis</a></div></article><article class="edu-item edu-item--muted"><div class=edu-header><span class=edu-degree>BA & MA, Literature & Linguistics / English Studies</span>
<span class=edu-period>2003 — 2010</span></div><div class=edu-inst>University of Lisbon · Faculty of Letters</div><div class=edu-thesis>Dissertation:
<a href="https://www.proquest.com/openview/8c3a6eca12dbf139da6134cf13cf51a1/1?pq-origsite=gscholar&cbl=2026366&diss=y" target=_blank rel=noopener>Mirroring the Mind: Towards an analysis of the psychological space in Lolita by Vladimir Nabokov and its visual manifestations</a></div></article></div><button class=ghost-toggle type=button data-expand-toggle>Show More ↓</button></div></section><section id=research class=section-block><div class=section-heading><span class=bar></span><div><h2>Research</h2></div></div><p class=section-note>Selected publications. Full list on
<a href="https://scholar.google.com/citations?user=VdV_-40AAAAJ" target=_blank rel=noopener>Google Scholar</a> &
<a href=https://www.amazon.science/author/sam-ribeiro target=_blank rel=noopener>Amazon Science</a>.</p><div class=collapsible data-expandable data-limit=3><div class=pub-list data-expandable-content><article class=pub-item><div class=pub-year>2023</div><div class=pub-body><p class=pub-venue>Interspeech 2023</p><h3 class=pub-title>Improving Grapheme-to-Phoneme Conversion by Learning Pronunciations from Speech Recordings</h3><p class=pub-desc>A method to improve G2P conversion by leveraging pronunciation information extracted directly from speech recordings, improving performance on out-of-vocabulary and domain-specific words.</p><div class=pub-links><a class=chip-link href=https://arxiv.org/abs/2307.16643 target=_blank rel=noopener>ArXiv</a>
<a class=chip-link href=https://www.amazon.science/publications/improving-grapheme-to-phoneme-conversion-by-learning-pronunciations-from-speech-recordings target=_blank rel=noopener>Amazon</a>
<a class=chip-link href=https://www.isca-archive.org/interspeech_2023/ribeiro23b_interspeech.pdf target=_blank rel=noopener>ISCA</a></div></div></article><article class=pub-item><div class=pub-year>2023</div><div class=pub-body><p class=pub-venue>Interspeech 2023</p><h3 class=pub-title>Comparing normalizing flows and diffusion models for prosody and acoustic modelling in text-to-speech</h3><p class=pub-desc>A comparison of flow-based, diffusion-based, and L1/L2 approaches for prosody and acoustic modelling in TTS. Flow-based models achieve the best spectrogram quality, while both diffusion and flow-based prosody predictors significantly outperform standard L2-trained models.</p><div class=pub-links><a class=chip-link href=https://arxiv.org/abs/2307.16679 target=_blank rel=noopener>ArXiv</a>
<a class=chip-link href=https://www.amazon.science/publications/comparing-normalizing-flows-and-diffusion-models-for-prosody-and-acoustic-modelling-in-text-to-speech target=_blank rel=noopener>Amazon</a>
<a class=chip-link href=https://www.isca-archive.org/interspeech_2023/zhang23o_interspeech.pdf target=_blank rel=noopener>ISCA</a></div></div></article><article class=pub-item><div class=pub-year>2022</div><div class=pub-body><p class=pub-venue>ICASSP 2022</p><h3 class=pub-title>Cross-Speaker Style Transfer for Text-to-Speech Using Data Augmentation</h3><p class=pub-desc>A data augmentation framework for cross-speaker style transfer in neural TTS, enabling a system to adopt a target speaker's style without parallel style data.</p><div class=pub-links><a class=chip-link href=https://arxiv.org/abs/2202.05083 target=_blank rel=noopener>ArXiv</a>
<a class=chip-link href=https://www.amazon.science/publications/cross-speaker-style-transfer-for-text-to-speech-using-data-augmentation target=_blank rel=noopener>Amazon</a>
<a class=chip-link href=https://ieeexplore.ieee.org/abstract/document/9746179/ target=_blank rel=noopener>IEEE</a></div></div></article><article class=pub-item><div class=pub-year>2021</div><div class=pub-body><p class=pub-venue>Speech Communication 2021</p><h3 class=pub-title>Exploiting ultrasound tongue imaging for the automatic detection of speech articulation errors</h3><p class=pub-desc>A system for automatic detection of speech sound disorders in children, combining audio and ultrasound modalities. The system correctly identified 86.6% of articulation errors flagged by clinicians, with potential for integration into ultrasound-based therapy software for automated progress monitoring.</p><div class=pub-links><a class=chip-link href=https://arxiv.org/abs/2103.00324 target=_blank rel=noopener>ArXiv</a>
<a class=chip-link href=https://www.research.ed.ac.uk/en/publications/exploiting-ultrasound-tongue-imaging-for-the-automatic-detection-/ target=_blank rel=noopener>Edinburgh Research</a>
<a class=chip-link href=https://doi.org/10.1016/j.specom.2021.02.001 target=_blank rel=noopener>DOI</a>
<a class=chip-link href=https://github.com/msamribeiro/ultrasound-speech-error-detection target=_blank rel=noopener>Code</a></div></div></article><article class=pub-item><div class=pub-year>2021</div><div class=pub-body><p class=pub-venue>SLT 2021 · Dataset</p><h3 class=pub-title>TaL: Tongue and Lips Corpus</h3><p class=pub-desc>Synchronised ultrasound tongue imaging and lip video from 82 native English speakers. A large-scale resource for speaker-independent articulatory modelling and silent speech research.</p><div class=pub-links><a class=chip-link href=https://arxiv.org/abs/2011.09804 target=_blank rel=noopener>ArXiv</a>
<a class=chip-link href=https://ultrasuite.github.io/data/tal_corpus/ target=_blank rel=noopener>Docs</a>
<a class=chip-link href=https://github.com/UltraSuite/tal-tools target=_blank rel=noopener>Code</a>
<a class=chip-link href=https://ultrasuite.github.io/data/tal_corpus/#download target=_blank rel=noopener>Data</a></div></div></article><article class=pub-item><div class=pub-year>2021</div><div class=pub-body><p class=pub-venue>Interspeech 2021</p><h3 class=pub-title>Silent versus modal multi-speaker speech recognition from ultrasound and video</h3><p class=pub-desc>Speaker-independent models for silent and modal speech recognition from ultrasound tongue imaging and lip video, demonstrating viability of silent speech recognition at scale.</p><div class=pub-links><a class=chip-link href=https://arxiv.org/pdf/2103.00333 target=_blank rel=noopener>ArXiv</a>
<a class=chip-link href=https://www.isca-archive.org/interspeech_2021/ribeiro21_interspeech.pdf target=_blank rel=noopener>ISCA</a></div></div></article><article class=pub-item><div class=pub-year>2019</div><div class=pub-body><p class=pub-venue>Research Grant (PI)</p><h3 class=pub-title>Silent Speech Interfaces for all - recognising speech from ultrasound images of the tongue</h3><p class=pub-desc>Carnegie Trust-funded research project investigating silent speech recognition from ultrasound tongue imaging. Produced multi-speaker corpora and models with applications for individuals with speech and communication disabilities.</p><div class=pub-links><a class=chip-link href=https://www.research.ed.ac.uk/en/projects/silent-speech-interfaces-for-all-recognising-speech-from-ultrasou/ target=_blank rel=noopener>Edinburgh Research</a></div></div></article><article class=pub-item><div class=pub-year>2019</div><div class=pub-body><p class=pub-venue>ICASSP 2019</p><h3 class=pub-title>Speaker-Independent Classification of Phonetic Segments from Raw Ultrasound in Child Speech</h3><p class=pub-desc>CNN-based classification of phonetic segments directly from raw ultrasound tongue images in child speech, supporting automatic speech therapy tools.</p><div class=pub-links><a class=chip-link href=https://arxiv.org/abs/1907.01413 target=_blank rel=noopener>ArXiv</a>
<a class=chip-link href=https://ieeexplore.ieee.org/abstract/document/8683564 target=_blank rel=noopener>IEEE</a>
<a class=chip-link href=https://www.research.ed.ac.uk/en/publications/speaker-independent-classification-of-phonetic-segments-from-raw-/ target=_blank rel=noopener>Edinburgh Research</a></div></div></article><article class=pub-item><div class=pub-year>2018</div><div class=pub-body><p class=pub-venue>Interspeech 2018 · Dataset</p><h3 class=pub-title>UltraSuite Repository</h3><p class=pub-desc>Synchronised ultrasound and acoustic data from child speech therapy sessions. Three datasets, including recordings of children with speech sound disorders, to support automatic clinical analysis tools.</p><div class=pub-links><a class=chip-link href=https://ultrasuite.github.io/papers/ultrasuite_IS18.pdf target=_blank rel=noopener>Paper</a>
<a class=chip-link href=https://ultrasuite.github.io target=_blank rel=noopener>Docs</a>
<a class=chip-link href=https://github.com/UltraSuite target=_blank rel=noopener>GitHub</a>
<a class=chip-link href=https://ultrasuite.github.io/download target=_blank rel=noopener>Data</a></div></div></article><article class=pub-item><div class=pub-year>2018</div><div class=pub-body><p class=pub-venue>Edinburgh Datashare 2018 · Dataset</p><h3 class=pub-title>Parallel Audiobook Corpus</h3><p class=pub-desc>~121 hours across 4 books and 59 speakers. Parallel readings designed for speech synthesis, voice conversion, and prosody modelling research.</p><div class=pub-links><a class=chip-link href=https://msamribeiro.github.io/parallel-corpus target=_blank rel=noopener>Docs</a>
<a class=chip-link href=https://datashare.is.ed.ac.uk/handle/10283/3217 target=_blank rel=noopener>Data</a></div></div></article><article class=pub-item><div class=pub-year>2018</div><div class=pub-body><p class=pub-venue>PhD Thesis</p><h3 class=pub-title>Suprasegmental representations for the modeling of fundamental frequency in statistical parametric speech synthesis</h3><p class=pub-desc>Novel representations of fundamental frequency for natural prosody generation in statistical parametric speech synthesis. Contributions include wavelet and cosine-based f0 representations, linguistic feature exploration, and hierarchical deep neural network models for TTS.</p><div class=pub-links><a class=chip-link href=https://era.ed.ac.uk/items/fe5bc86b-4802-4f8e-a11d-daf783b05717 target=_blank rel=noopener>Edinburgh Research</a></div></div></article><article class=pub-item><div class=pub-year>2015</div><div class=pub-body><p class=pub-venue>ICASSP 2015</p><h3 class=pub-title>A multi-level representation of f0 using the continuous wavelet transform and the discrete cosine transform</h3><p class=pub-desc>A compact f0 representation combining the Continuous Wavelet Transform and Discrete Cosine Transform, capturing prosodic variation across multiple scales of the prosodic hierarchy. Improves f0 prediction over traditional short-term approaches with fewer model parameters.</p><div class=pub-links><a class=chip-link href=https://ieeexplore.ieee.org/abstract/document/7178904 target=_blank rel=noopener>IEEE</a>
<a class=chip-link href=https://www.pure.ed.ac.uk/ws/files/57836585/ribeiro_and_clark_icassp15.pdf target=_blank rel=noopener>Edinburgh Research</a></div></div></article></div><button class=ghost-toggle type=button data-expand-toggle>Show More ↓</button></div></section><section id=writing class="writing-grid section-block"><div class=section-heading><span class=bar></span><div><h2>Blog</h2></div></div><div class=card-grid><a href=https://msamribeiro.com/blog/ class=category-card style=background-image:url(/images/writing-theme-v3.png)><div class=overlay><h3>Blog</h3><p>Sam's personal blog and learning notes.</p></div></a></div></section></main><footer class=site-footer><div class=footer-container>© 2026 Manuel Sam Ribeiro • Powered by
<a href=https://gohugo.io/ target=_blank rel=noopener>Hugo</a> &
<a href=https://github.com/msamribeiro/hugo-celadon target=_blank rel=noopener>Celadon</a></div></footer></div><script>(function(){const e="morandi-theme",t=document.body,n=()=>document.querySelectorAll("[data-theme-toggle]"),s=e=>{const s=e==="dark"?"dark":"light";t.setAttribute("data-theme",s),n().forEach(e=>{const t=e.querySelector("[data-icon-sun]"),n=e.querySelector("[data-icon-moon]");e.setAttribute("aria-pressed",s==="dark"),t&&(t.style.opacity=s==="dark"?"0":"1"),n&&(n.style.opacity=s==="dark"?"1":"0")})},o=localStorage.getItem(e),i=window.matchMedia("(prefers-color-scheme: dark)").matches;s(o||(i?"dark":"light")),n().forEach(n=>{n.addEventListener("click",()=>{const n=t.getAttribute("data-theme")==="dark"?"light":"dark";s(n),localStorage.setItem(e,n)})})})(),function(){const e=Array.from(document.querySelectorAll("[data-expandable]"));if(!e.length)return;e.forEach(e=>{const n=parseInt(e.dataset.limit,10),s=e.querySelector("[data-expandable-content]"),t=e.querySelector("[data-expand-toggle]");if(!s||!Number.isFinite(n))return;const o=Array.from(s.children);if(o.length<=n){t&&t.remove();return}if(!t)return;e.dataset.state="collapsed",t.setAttribute("aria-expanded","false");const i=()=>{const s=e.dataset.state==="expanded";o.forEach((e,t)=>{const o=!s&&t>=n;e.classList.toggle("hidden",o)}),t.textContent=s?"Show Less ↑":"Show More ↓",t.setAttribute("aria-expanded",s.toString())};t.addEventListener("click",()=>{e.dataset.state=e.dataset.state==="expanded"?"collapsed":"expanded",i()}),i()})}()</script><button id=scroll-top aria-label="Go to Top">↑</button>
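<!-- Inline behaviour scripts: the minified block above wires the light/dark theme toggle (persisted in localStorage under "morandi-theme", defaulting to the OS colour scheme) and the Show More / Show Less expandable sections; the block below auto-renders KaTeX math once the deferred KaTeX bundle loads, adds copy-to-clipboard buttons to highlighted code blocks, and drives the scroll-to-top button. -->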
<script>(function(){const e=()=>{const t=()=>typeof window!="undefined"&&typeof window.renderMathInElement=="function"&&typeof window.katex!="undefined"&&(window.renderMathInElement(document.body,{delimiters:[{left:"$$",right:"$$",display:!0},{left:"$",right:"$",display:!1},{left:"\\(",right:"\\)",display:!1},{left:"\\[",right:"\\]",display:!0}],throwOnError:!1}),!0),e=()=>{t()||window.requestAnimationFrame(e)};e()};document.readyState==="loading"?document.addEventListener("DOMContentLoaded",e):e(),document.querySelectorAll(".highlight").forEach(function(e){const o=e.querySelector("td.lntd:last-child pre code");if(!o)return;const r=Array.from(o.classList).find(e=>e.startsWith("language-")),l=r?r.replace("language-","").toUpperCase():"CODE",n=document.createElement("div");n.className="code-frame";const s=document.createElement("div");s.className="code-frame__header";const i=document.createElement("span");i.className="code-frame__lang",i.textContent=l,s.appendChild(i);const t=document.createElement("button");t.className="copy-btn",t.ariaLabel="Copy code";const c=`
<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.8" stroke-linecap="round" stroke-linejoin="round">
<rect x="9" y="9" width="13" height="13" rx="2" ry="2"></rect>
<path d="M5 15H4a2 2 0 0 1-2-2V4a2 2 0 0 1 2-2h9a2 2 0 0 1 2 2v1"></path>
</svg>
`,d=`
<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.8" stroke-linecap="round" stroke-linejoin="round">
<polyline points="20 6 9 17 4 12"></polyline>
</svg>
`;t.innerHTML=c,s.appendChild(t);const a=document.createElement("div");a.className="code-frame__body",n.appendChild(s),n.appendChild(a);const u=e.parentNode;e.classList.add("code-frame__table"),u.replaceChild(n,e),a.appendChild(e),t.addEventListener("click",function(){navigator.clipboard.writeText(o.innerText).then(function(){t.classList.add("copied"),t.innerHTML=d,setTimeout(function(){t.classList.remove("copied"),t.innerHTML=c},2e3)})})})})(),function(){const e=document.getElementById("scroll-top");if(!e)return;window.addEventListener("scroll",()=>{window.scrollY>300?e.classList.add("show-scroll"):e.classList.remove("show-scroll")}),e.addEventListener("click",()=>{window.scrollTo({top:0,behavior:"smooth"})})}()</script></body></html>