Automating dev.to Publishing with GitHub Actions



This content originally appeared on DEV Community and was authored by px4n

As someone who has been blogging on my personal Hugo site for years, I recently decided to start sharing my content on dev.to to reach a broader developer audience. However, I quickly realized that manually cross-posting content would be a tedious process. After publishing a technical article on my blog, I’d have to copy, paste, reformat, and manually sync it to dev.to. Not only would this be time-consuming, but it would also be error-prone and likely lead to inconsistencies between platforms.

Since this is actually my first post on dev.to, I thought it would be fitting to share how I automated the entire syndication process using GitHub Actions and Node.js (though I should mention upfront that Node.js isn’t my strongest language, so I’d welcome any feedback or suggestions for improvement!).

The Challenge

The main challenges:

  • Hugo uses shortcodes like {{< image >}} and {{< code >}} for enhanced functionality
  • dev.to uses its own liquid tags and doesn’t understand Hugo shortcodes
  • Both platforms handle metadata differently
  • Keeping canonical URLs consistent required careful URL generation logic
  • Managing orphaned articles when content gets deleted or renamed

I needed a solution that would:

  1. Automatically detect when blog posts change
  2. Transform Hugo-specific content to dev.to-compatible format
  3. Handle both creating new articles and updating existing ones
  4. Clean up orphaned articles that no longer exist in the source
  5. Provide comprehensive logging and error handling

The Solution: A Simple Sync Script

I built a Node.js script that integrates with GitHub Actions. While Node.js isn’t my go-to language, it seemed like the natural choice for this task given the excellent ecosystem around markdown parsing and HTTP APIs. Here’s how it works:
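
At a high level, the pieces fit together roughly like this – a hedged sketch rather than the verbatim main function from the repository; createOrUpdateArticle and getAllDevToArticles are hypothetical names for the helpers that talk to the dev.to API:

async function main() {
  const changedFiles = await getChangedFiles();

  for (const file of changedFiles) {
    const parsed = parseHugoFile(file); // front-matter parse: { attributes, body }
    const decision = shouldSyncPost(parsed.attributes, file);
    if (!decision.sync) {
      console.log(`Skipping ${file}: ${decision.reason}`);
      continue;
    }

    const body = transformHugoShortcodes(parsed.body);
    const canonicalUrl = generateCanonicalUrl(file, parsed.attributes);
    await createOrUpdateArticle(parsed.attributes, body, canonicalUrl); // hypothetical helper
  }

  // Full-sync runs also prune dev.to articles whose source files are gone
  if (process.env.FORCE_SYNC_ALL === "true") {
    const contentDir = process.env.CONTENT_DIR || "content/";
    await cleanupOrphanedArticles(findAllMarkdownFiles(contentDir), await getAllDevToArticles());
  }
}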

Content Detection

The script uses two modes:

  • Incremental sync: Only processes files changed in the latest commit (perfect for automatic deployments)
  • Full sync: Processes all markdown files and cleans up orphaned content (great for manual maintenance)

const { execSync } = require("child_process");

async function getChangedFiles() {
  const contentDir = process.env.CONTENT_DIR || "content/";

  if (process.env.FORCE_SYNC_ALL === "true") {
    // Full sync: process every markdown file in the content directory
    return findAllMarkdownFiles(contentDir);
  }

  // Incremental sync: only files touched by the latest commit
  const gitOutput = execSync("git diff --name-only HEAD~1 HEAD", { encoding: "utf8" });
  return gitOutput
    .split("\n")
    .filter(file => file.startsWith(contentDir) && file.endsWith(".md"));
}

Content Filtering

Not every blog post should go to dev.to. The script includes filtering logic with automatic tag sanitization for dev.to’s strict requirements:

function shouldSyncPost(frontMatter, filePath) {
  // Must explicitly opt-in with devto = true
  if (frontMatter.devto !== true) {
    return { sync: false, reason: "dev.to sync not enabled" };
  }

  // Skip drafts and private posts
  if (frontMatter.draft === true || frontMatter.visibility === "private") {
    return { sync: false, reason: "post not ready for publication" };
  }

  // Check directory path (configurable via CONTENT_DIR and allowedDirectories)
  const contentDir = process.env.CONTENT_DIR || "content/";
  const relativePath = filePath.replace(new RegExp(`^${contentDir}`), "");
  const allowedDirs = ["blog/", "articles/", "posts/"];

  if (!allowedDirs.some(dir => relativePath.startsWith(dir))) {
    return { sync: false, reason: "not in allowed directory" };
  }

  // dev.to has a maximum of 4 tags - warn if limiting
  if (frontMatter.tags && frontMatter.tags.length > 4) {
    console.warn(`Post has ${frontMatter.tags.length} tags, limiting to first 4 for dev.to`);
  }

  // All checks passed – post is eligible for sync
  return { sync: true, reason: "all validation checks passed" };
}
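
For reference, here's roughly what the TOML front matter of an opted-in post looks like – the values are just illustrative, but the field names are the ones the script checks:

+++
title = "Automating dev.to Publishing with GitHub Actions"
tags = ["github-actions", "hugo", "automation", "devops"]
draft = false
devto = true
+++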

Hugo Shortcode Transformation

One of the trickiest parts was converting Hugo shortcodes to dev.to-compatible format:

function transformHugoShortcodes(content) {
  let transformed = content;

  // Transform {{< image >}} to markdown
  transformed = transformed.replace(/\{\{<\s*image\s+src="([^"]+)"\s+alt="([^"]*)"\s*.*>\}\}/g, (match, src, alt) => {
    const imageUrl = src.startsWith("/") ? `${process.env.HUGO_BASE_URL}${src}` : src;
    return `![${alt}](${imageUrl})`;
  });

  // Transform {{< youtube >}} to dev.to liquid tags
  transformed = transformed.replace(
    /\{\{<\s*youtube\s+([^>\s]+)\s*>\}\}/g,
    (match, videoId) => `{% youtube ${videoId} %}`
  );

  return transformed;
}
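
To make the behavior concrete, here's a small input/output pair (assuming HUGO_BASE_URL is set to https://yoursite.com):

process.env.HUGO_BASE_URL = "https://yoursite.com";

const sample = `{{< image src="/images/pipeline.png" alt="Pipeline overview" >}}
{{< youtube abc123 >}}`;

console.log(transformHugoShortcodes(sample));
// ![Pipeline overview](https://yoursite.com/images/pipeline.png)
// {% youtube abc123 %}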

Canonical URL Generation

To maintain SEO consistency, the script automatically generates canonical URLs based on Hugo’s routing conventions:

function generateCanonicalUrl(filePath, frontMatter) {
  const baseUrl = process.env.HUGO_BASE_URL;

  // Extract relative path from configurable content directory
  const contentDir = process.env.CONTENT_DIR || "content/";
  const relativePath = filePath.replace(new RegExp(`^${contentDir}`), "");

  const fileInfo = path.parse(relativePath);
  const fileName = fileInfo.name;
  const dir = fileInfo.dir;

  // Handle multilingual files (e.g., my-post.en.md)
  const langMatch = fileName.match(/^(.+)\.([a-z]{2})$/);

  let slug, language;
  if (langMatch) {
    slug = langMatch[1];
    language = langMatch[2];
  } else {
    slug = fileName;
    language = null; // Default language (English)
  }

  // Build URL path based on actual file structure
  let urlPath;
  if (language && language !== "en") {
    urlPath = dir ? `/${language}/${dir}/${slug}/` : `/${language}/${slug}/`;
  } else {
    urlPath = dir ? `/${dir}/${slug}/` : `/${slug}/`;
  }

  return `${baseUrl}${urlPath}`;
}
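
A couple of examples of how this maps file paths to URLs, assuming HUGO_BASE_URL=https://yoursite.com and the default content/ directory:

// Default-language post
generateCanonicalUrl("content/blog/my-first-post.md", {});
// => "https://yoursite.com/blog/my-first-post/"

// German translation of the same post
generateCanonicalUrl("content/blog/my-first-post.de.md", {});
// => "https://yoursite.com/de/blog/my-first-post/"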

Orphaned Article Cleanup

When I delete or rename blog posts, the corresponding dev.to articles become orphaned. The script handles this automatically:

async function cleanupOrphanedArticles(allHugoFiles, allDevToArticles) {
  const hugoArticles = new Set();

  // Collect all canonical URLs and titles from Hugo files
  for (const file of allHugoFiles) {
    const parsed = parseHugoFile(file);
    if (shouldSyncPost(parsed.attributes, file).sync) {
      hugoArticles.add(parsed.attributes.canonical_url);
      hugoArticles.add(parsed.attributes.title);
    }
  }

  // Find orphaned articles on dev.to
  const orphanedArticles = allDevToArticles.filter(article => {
    const hasCanonicalMatch = hugoArticles.has(article.canonical_url);
    const hasTitleMatch = hugoArticles.has(article.title);
    return !hasCanonicalMatch && !hasTitleMatch;
  });

  // Unpublish orphaned articles
  for (const article of orphanedArticles) {
    if (process.env.AUTO_DELETE_DEVTO === "true") {
      await unpublishArticle(article);
    }
  }
}
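
The unpublishArticle helper isn't shown above; a minimal version could look roughly like this, using axios (which the workflows already install) and the dev.to articles endpoint to flip published to false:

const axios = require("axios");

// Hedged sketch – unpublish by updating the article with published: false
async function unpublishArticle(article) {
  await axios.put(
    `https://dev.to/api/articles/${article.id}`,
    { article: { published: false } },
    { headers: { "api-key": process.env.DEVTO_API_KEY } }
  );
  console.log(`Unpublished orphaned article: "${article.title}"`);
}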

GitHub Actions Integration

The script integrates into three different workflows:

1. Automatic Deployment (develop branch)

- name: Sync Hugo Site to dev.to
  run: |
    npm install axios front-matter
    node .github/workflows/scripts/sync-devto.js
  env:
    DEVTO_API_KEY: ${{ secrets.DEVTO_API_KEY }}
    HUGO_BASE_URL: https://yoursite.com
    CONTENT_DIR: content/
    DEBUG_LEVEL: 2
    FORCE_SYNC_ALL: false
    AUTO_DELETE_DEVTO: true
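
The workflow wrapping this step isn't shown here; for the automatic case it's simply triggered by pushes to the develop branch, along these lines:

on:
  push:
    branches: [develop]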

2. Manual Deployment (with optional sync)

- name: Sync Hugo Site to dev.to
  if: github.event.inputs.sync_devto == 'true'
  run: |
    npm install axios front-matter
    node .github/workflows/scripts/sync-devto.js
  env:
    CONTENT_DIR: content/
    DEBUG_LEVEL: 4 # More verbose for manual runs
    FORCE_SYNC_ALL: true
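
The sync_devto input referenced in the if: condition comes from the workflow's manual trigger, which could be declared roughly like this:

on:
  workflow_dispatch:
    inputs:
      sync_devto:
        description: "Also sync content to dev.to"
        required: false
        default: "false"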

3. Content Validation (CI)

The CI workflow includes validation specifically for dev.to posts:

- name: Validate dev.to Posts
  run: |
    CONTENT_DIR=${CONTENT_DIR:-content/}
    find "$CONTENT_DIR" -name "*.md" | while read post; do
      if grep -q "devto.*=.*true" "$post"; then
        # Check for required fields
        if ! grep -q "title.*=" "$post"; then
          echo "Missing title in $post"
        fi
        if ! grep -q "tags.*=" "$post"; then
          echo "Missing tags in $post"
        fi
        # Check tag count (dev.to maximum is 4)
        tag_count=$(grep -o "tags.*=.*\[.*\]" "$post" | grep -o '"[^"]*"' | wc -l)
        if [ "$tag_count" -gt 4 ]; then
          echo "Warning: $post has $tag_count tags, dev.to allows maximum 4"
        fi
      fi
    done

Real-World Results

Since implementing this system, my content workflow has improved:

  • No more manual copying: The script handles the content transformation automatically
  • Consistent formatting: Hugo shortcodes get converted properly
  • Canonical URLs work: The URL generation logic maintains SEO consistency
  • Orphaned cleanup: Deleted posts get unpublished from dev.to automatically

So far the system works reliably for my use case. The logging helps when debugging issues, and the validation catches most problems before they reach dev.to.

Configuration Options

The script supports several environment variables for customization (a minimal loading sketch follows the list):

  • CONTENT_DIR: Directory containing your markdown files (default: content/)
  • DEVTO_API_KEY: Your dev.to API key (required)
  • HUGO_BASE_URL: Base URL for canonical links (required)
  • FORCE_SYNC_ALL: Process all files vs only changed ones (default: false)
  • AUTO_DELETE_DEVTO: Auto-cleanup orphaned articles (default: false)
  • DEBUG_LEVEL: Logging verbosity 0-4 (default: 2)
  • Tag sanitization: not an environment variable – dev.to's alphanumeric-only tag requirement is handled automatically
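
Here's a minimal sketch of how these variables could be read and validated in one place – not verbatim from the script, and stricter validation is still on the future-improvements list below:

// Read configuration from the environment, failing fast on required values
function loadConfig() {
  const required = ["DEVTO_API_KEY", "HUGO_BASE_URL"];
  const missing = required.filter(name => !process.env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }

  return {
    apiKey: process.env.DEVTO_API_KEY,
    baseUrl: process.env.HUGO_BASE_URL,
    contentDir: process.env.CONTENT_DIR || "content/",
    forceSyncAll: process.env.FORCE_SYNC_ALL === "true",
    autoDelete: process.env.AUTO_DELETE_DEVTO === "true",
    debugLevel: Number(process.env.DEBUG_LEVEL || 2),
  };
}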

Directory Structure Support

The script works with any Hugo content structure:

  • content/posts/ (traditional)
  • content/blog/ (common alternative)
  • content/articles/ (another common pattern)
  • content/tech/ or any custom directory

Simply set CONTENT_DIR=content/ and configure the allowed subdirectories in the script.

Tag Limitations

dev.to has a maximum of 4 tags per article. The script automatically:

  • Takes the first 4 tags from your Hugo front matter
  • Logs a warning when limiting tags
  • Preserves all tags in your Hugo site

dev.to Tag Restrictions

dev.to has strict tag requirements that the script handles automatically:

  • Alphanumeric only: Tags can only contain letters and numbers
  • No special characters: Hyphens, periods, spaces are automatically removed
  • Maximum length: 30 characters per tag
  • Maximum count: 4 tags per article

The script automatically sanitizes tags with full transparency:

function sanitizeTagsForDevTo(tags) {
  return tags
    .map(tag => {
      let sanitized = String(tag)
        .toLowerCase()
        .replace(/[^a-z0-9]/g, "") // Remove non-alphanumeric
        .substring(0, 30);

      if (sanitized !== tag.toLowerCase()) {
        console.log(`Tag transformed: "${tag}" → "${sanitized}"`);
      }

      return sanitized;
    })
    .filter(tag => tag.length > 0);
}

Example transformations:

  • github-actions → githubactions
  • dev.to → devto
  • ci-cd → cicd
  • content-syndication → contentsyndication

This means you can keep your original Hugo tags descriptive and readable – the script handles dev.to compatibility automatically.

Future Improvements

While the current system works great, there are several enhancements I’m considering:

Enhanced Content Transformation

  • Table conversion: Better handling of Hugo table shortcodes
  • Math notation: Support for LaTeX math expressions
  • Interactive elements: Transform Hugo-specific interactive shortcodes

Smarter Sync Logic

  • Content diffing: Only update articles when content actually changes
  • Selective field updates: Update only specific fields (tags, title) without republishing
  • Scheduling: Support for delayed publishing based on Hugo’s date fields

Multi-Platform Support

  • Hashnode integration: Extend to other developer blogging platforms
  • Medium support: Though their API is more limited

Analytics Integration

  • Sync tracking: Monitor which posts perform better on which platforms
  • Engagement metrics: Track cross-platform engagement and adjust strategy accordingly

Better Configuration Management

  • Environment validation: Better error messages for missing configuration
  • Config file support: YAML/JSON configuration files as an alternative to environment variables
  • Directory auto-detection: Automatically detect Hugo content structure

Learning Along the Way

Since Node.js isn’t my strongest language, this project was a good learning experience. A few things I discovered:

  • The front-matter npm package made parsing Hugo’s TOML front matter much easier than trying to roll my own parser
  • Handling async/await properly took some getting used to, especially when processing multiple files in sequence
  • The regex patterns for shortcode transformation were trickier than expected – there are probably more elegant ways to handle this

Conclusion

This setup has turned cross-posting from a manual chore into something I no longer have to think about: I write in Hugo as usual, and the articles show up on dev.to with correct formatting and canonical URLs. If anyone has suggestions for improving the Node.js code, I'd love to hear them.

If you’re interested in the implementation details, you can find the complete sync script here: GitHub Repository

