References in GitHub workflows and composite actions

Enabling a GitHub composite action to call another action in the same repo.

This can be done with reusable workflows, but not yet with composite actions: github.blog/changelog/2022-01-25-github-actions-reusable-workflows-can-be-referenced-locally

The problem

When running a composite action (even in a reusable workflow), I want to be able to use the same GitHub ref for the actions that exist within the remote repo. For instance, I have an actions monorepo and would like some of my composite actions to be able to call another action within the monorepo using the same version (ref) that the calling action was pinned to.

For example, the check-semver-labels action calls the label-checker action:

    - name: Check for semantic version labels
      uses: rwaight/actions/github/label-checker@main  # can use version specific or main
      #uses: rwaight/actions/github/label-checker@v1
      id: semver-labels-check
      with:
        prefix_mode: true
        one_of: ${{ inputs.semver-prefix }}
        #none_of: "skip-changelog"
        allow_failure: ${{ inputs.allow-failure }}
        repo_token: ${{ inputs.gh-token }}
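A partial workaround worth noting: inside a composite action, the github.action_path context resolves to the directory of the running action on the runner, so sibling files in the monorepo are already checked out at whatever ref the caller pinned. A step can invoke a sibling script this way (the relative path and script name below are hypothetical), although this does not make uses: references resolve locally:

```yaml
    - name: Call a sibling script at the same ref
      shell: bash
      run: |
        # github.action_path is the directory of this composite action on
        # the runner, so sibling files travel with the caller's pinned ref;
        # the relative path and script name below are hypothetical examples
        bash "${{ github.action_path }}/../label-checker/entrypoint.sh"
```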

References

Potential solutions

Here are some potential solutions:

Updating JSON objects with JQ

Parsing entries in a JSON array

Given the following array containing directory paths:

[
    "/home/user/project/file1.txt",
    "/home/user/project/subdir/file2.log"
]

Now stored as the paths variable:

paths='[
    "/home/user/project/file1.txt",
    "/home/user/project/subdir/file2.log"
]'

Removing the prefix

Removing the prefix with gsub

We can use gsub to remove the /home/user/ directory prefix:

gsub_no_prefix=$(jq -c '.[] |= gsub("^/home/user/";"")' <<< "$paths")

Which produces:

~ echo $gsub_no_prefix
["project/file1.txt","project/subdir/file2.log"]

Removing the prefix with JQ map

We can also use JQ map to remove the /home/user/ directory prefix:

map_no_prefix=$(echo "$paths" |
  jq -c 'map(gsub("^/home/user/";""))'
)

Which produces:

~ echo $map_no_prefix
["project/file1.txt","project/subdir/file2.log"]

Selecting entries based on a pattern

Selecting entries that match a pattern

Use select(test(...)) to create a new array containing only paths that end in .txt:

dot_txt=$(jq -c 'map(select(test("\\.txt$")))' <<< "$paths")

Which produces:

~ echo $dot_txt
["/home/user/project/file1.txt"]

Selecting entries that do not match a pattern

Use select(test(... | not)) to create a new array excluding paths that end in .log:

no_dot_log=$(jq -c 'map(select(test("\\.log$") | not))' <<< "$paths")

Which produces:

~ echo $no_dot_log
["/home/user/project/file1.txt"]

Putting it all together

We now have an updated array stored as the paths variable:

#!/usr/bin/env bash

# updated array
paths='[
    "/home/user/project/file1.txt",
    "/home/user/project/subdir/file2.log",
    "/home/user/data/foo.txt",
    "/home/user/data/bar.log",
    "/home/user/data/baz.txt"
]'

# 1) remove the '/home/user/' prefix
no_prefix=$(jq -c '.[] |= gsub("^/home/user/";"")' <<< "$paths")

# 2) keep only '.txt' files
only_txt_files=$(jq -c 'map(select(test("\\.txt$")))' <<< "$no_prefix")

# 3) exclude '.txt' files
no_txt_files=$(jq -c 'map(select(test("\\.txt$") | not))' <<< "$no_prefix")

# 4) print the results
echo "Only '.txt' files: "
echo "  ${only_txt_files}"
echo ""
echo "Exclude '.txt' files: "
echo "  ${no_txt_files}"

Which produces:

Only '.txt' files:
  ["project/file1.txt","data/foo.txt","data/baz.txt"]

Exclude '.txt' files:
  ["project/subdir/file2.log","data/bar.log"]

Updating the JSON object

Updated JSON array

Now we have a more varied set of data in our JSON array.

Given the following array containing directory paths:

[
    "/home/user/projects/my-test-abc/foo.txt",
    "/home/user/projects/my-test-abc/bar.log",
    "/home/user/projects/dev-jkl/foo.txt",
    "/home/user/projects/dev-jkl/bar.log",
    "/home/user/projects/dev-jkl/baz.log",
    "/home/user/projects/primary-test-project-xyz/foo.txt",
    "/home/user/projects/primary-test-project-xyz/bar.log",
    "/home/user/projects/primary-test-project-xyz/baz.txt"
]

Now stored as the paths variable:

paths='[
    "/home/user/projects/my-test-abc/foo.txt",
    "/home/user/projects/my-test-abc/bar.log",
    "/home/user/projects/dev-jkl/foo.txt",
    "/home/user/projects/dev-jkl/bar.log",
    "/home/user/projects/dev-jkl/baz.log",
    "/home/user/projects/primary-test-project-xyz/foo.txt",
    "/home/user/projects/primary-test-project-xyz/bar.log",
    "/home/user/projects/primary-test-project-xyz/baz.txt"
]'

The planned output

I want the output to be usable by a GitHub actions matrix, so it should be:

{
    "projects": [
        "my-test-abc",
        "dev-jkl",
        "primary-test-project-xyz"
    ],
    "include": [
        {
            "project": "my-test-abc",
            "files": [
                "my-test-abc/foo.txt",
                "my-test-abc/bar.log"
            ]
        },
        {
            "project": "dev-jkl",
            "files": [
                "dev-jkl/foo.txt",
                "dev-jkl/bar.log",
                "dev-jkl/baz.log"
            ]
        },
        {
            "project": "primary-test-project-xyz",
            "files": [
                "primary-test-project-xyz/foo.txt",
                "primary-test-project-xyz/bar.log",
                "primary-test-project-xyz/baz.txt"
            ]
        }
    ]
}

Building the new JSON object

jq -c '
  # 1) Strip off the /home/user/projects/ directory prefix
  map(sub("^/home/user/projects/";""))

  # 2) Turn each element into an object with "project" and "file" keys
  | map({ project: (split("/")[0]), file: . })

  # 3) (Optional) Sort by project so group_by will work predictably
  | sort_by(.project)

  # 4) Group into arrays by project name
  | group_by(.project)

  # 5) Build the final output object:
  | {
      projects:   map(.[0].project),                # a simple list of project names
      include:    map({
                     project: .[0].project,         # the project name
                     files:   map(.file)            # all files in that project
                 })
    }
' <<<"$paths"

Which produces:

{"projects":["dev-jkl","my-test-abc","primary-test-project-xyz"],"include":[{"project":"dev-jkl","files":["dev-jkl/foo.txt","dev-jkl/bar.log","dev-jkl/baz.log"]},{"project":"my-test-abc","files":["my-test-abc/foo.txt","my-test-abc/bar.log"]},{"project":"primary-test-project-xyz","files":["primary-test-project-xyz/foo.txt","primary-test-project-xyz/bar.log","primary-test-project-xyz/baz.txt"]}]}

About the key filters

Some information about the key filters:

  • map(sub("^/home/user/projects/";""))

    • Removes the fixed leading path so you are left with "my-test-abc/foo.txt", etc.
  • map({ project: (split("/")[0]), file: . })

    • Splits each string on / and uses the first segment as project, the whole string as file.
  • sort_by(.project) | group_by(.project)

    • Ensures identical projects are adjacent, then buckets them into arrays.
  • Building the output object

    • projects becomes a flat array of each group’s name.
    • include is an array of { project, files } objects.
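To see the sort_by / group_by bucketing in isolation, here is a tiny standalone sketch (the sample data is illustrative, not from the paths above):

```shell
# group_by sorts by the grouping key, then buckets equal keys together;
# the '.[0].p' pattern pulls the shared key back out of each bucket
jq -c 'sort_by(.p) | group_by(.p) | map({p: .[0].p, n: length})' \
  <<< '[{"p":"b"},{"p":"a"},{"p":"b"}]'
# → [{"p":"a","n":1},{"p":"b","n":2}]
```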

Storing the output as a variable

matrix=$(jq -c '
  map(sub("^/home/user/projects/";""))
  | map({ project: (split("/")[0]), file: . })
  | sort_by(.project)
  | group_by(.project)
  | {
      projects: map(.[0].project),
      include:  map({ project: .[0].project, files: map(.file) })
    }
' <<<"$paths")

Which produces:

~ echo $matrix
{"projects":["dev-jkl","my-test-abc","primary-test-project-xyz"],"include":[{"project":"dev-jkl","files":["dev-jkl/foo.txt","dev-jkl/bar.log","dev-jkl/baz.log"]},{"project":"my-test-abc","files":["my-test-abc/foo.txt","my-test-abc/bar.log"]},{"project":"primary-test-project-xyz","files":["primary-test-project-xyz/foo.txt","primary-test-project-xyz/bar.log","primary-test-project-xyz/baz.txt"]}]}

or, if you want it to be more readable, pipe the output to jq:

~ echo $matrix | jq
{
  "projects": [
    "dev-jkl",
    "my-test-abc",
    "primary-test-project-xyz"
  ],
  "include": [
    {
      "project": "dev-jkl",
      "files": [
        "dev-jkl/foo.txt",
        "dev-jkl/bar.log",
        "dev-jkl/baz.log"
      ]
    },
    {
      "project": "my-test-abc",
      "files": [
        "my-test-abc/foo.txt",
        "my-test-abc/bar.log"
      ]
    },
    {
      "project": "primary-test-project-xyz",
      "files": [
        "primary-test-project-xyz/foo.txt",
        "primary-test-project-xyz/bar.log",
        "primary-test-project-xyz/baz.txt"
      ]
    }
  ]
}
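To actually consume this object in a workflow, it can be exposed as a job output and fed to fromJSON in a matrix strategy. This is a minimal sketch: the job names and sample paths are hypothetical, and only the include entries are used as the matrix axis here:

```yaml
jobs:
  build-matrix:
    runs-on: ubuntu-latest
    outputs:
      matrix: ${{ steps.set.outputs.matrix }}
    steps:
      - name: Build the matrix JSON
        id: set
        run: |
          paths='["/home/user/projects/my-test-abc/foo.txt","/home/user/projects/dev-jkl/bar.log"]'
          matrix=$(jq -c '
            map(sub("^/home/user/projects/";""))
            | map({ project: (split("/")[0]), file: . })
            | group_by(.project)
            | { projects: map(.[0].project),
                include:  map({ project: .[0].project, files: map(.file) }) }
          ' <<<"$paths")
          echo "matrix=${matrix}" >> "$GITHUB_OUTPUT"

  per-project:
    needs: build-matrix
    runs-on: ubuntu-latest
    strategy:
      # expand one job per 'include' entry; 'matrix.project' and
      # 'matrix.files' are then available inside each job
      matrix:
        include: ${{ fromJSON(needs.build-matrix.outputs.matrix).include }}
    steps:
      - run: echo "Project ${{ matrix.project }} -> ${{ toJSON(matrix.files) }}"
```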

Other changes or improvements

The following changes or improvements can be made:

  • The sort order (e.g. alphabetical) can be changed by modifying or removing sort_by(.project)
  • We can rename the project slugs by inserting a | gsub("^(my-|dev-|primary-test-project-)";"") on the .project expression
    • This could be helpful if we need to drop prefixes
    • It would probably make sense to read the directory-name prefixes from a variable, such as an environment variable
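The renaming idea can be sketched as follows; the prefix list handed to gsub is illustrative and would likely come from configuration in practice:

```shell
#!/usr/bin/env bash
# strip known prefixes from the 'project' slug while leaving the
# untouched directory names in 'files'
paths='[
    "/home/user/projects/my-test-abc/foo.txt",
    "/home/user/projects/dev-jkl/bar.log"
]'

matrix=$(jq -c '
  map(sub("^/home/user/projects/";""))
  | map({ project: (split("/")[0] | gsub("^(my-|dev-|primary-test-project-)";"")), file: . })
  | group_by(.project)
  | { projects: map(.[0].project),
      include:  map({ project: .[0].project, files: map(.file) }) }
' <<<"$paths")

echo "$matrix"
# → {"projects":["jkl","test-abc"],"include":[{"project":"jkl","files":["dev-jkl/bar.log"]},{"project":"test-abc","files":["my-test-abc/foo.txt"]}]}
```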

Validating the paths variable

If there is a situation where the paths variable is set to a single path that is not properly formatted as a JSON array, we can check and fix it using jq:

Checking the paths variable:

We can evaluate the 'type' using jq:

jq -e 'type == "array"' <<<"$paths"

Fixing the paths variable:

We can wrap the single path in a JSON array using jq:

jq -nc --arg p "$paths" '[$p]'

Putting it together:

paths='/home/user/projects/my-test-abc/foo.txt'

# 1) if it's not already a JSON array, wrap it in one
if ! jq -e 'type == "array"' <<<"$paths" >/dev/null 2>&1; then
    paths=$(jq -nc --arg p "$paths" '[$p]')
    #
    # now $paths is guaranteed to be a JSON array
    echo "Normalized paths: $paths"
    # → ["/home/user/projects/my-test-abc/foo.txt"]
fi
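A related situation (an assumption beyond the original example) is receiving the paths as plain newline-separated text instead of JSON; jq's -R (raw input) and -s (slurp) flags can build the array:

```shell
raw_paths='/home/user/projects/my-test-abc/foo.txt
/home/user/projects/dev-jkl/bar.log'

# -R reads raw text and -s slurps it into a single string; split on
# newlines and drop empty entries to get a JSON array of paths
paths=$(jq -Rsc 'split("\n") | map(select(length > 0))' <<<"$raw_paths")
echo "$paths"
# → ["/home/user/projects/my-test-abc/foo.txt","/home/user/projects/dev-jkl/bar.log"]
```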

Multi-line strings in YAML

Content Source

This content is from this wonderful answer from stackoverflow.

There are 5 6 NINE (or 63*, depending how you count) different ways to write multi-line strings in YAML.

TL;DR

  • Use > if you want to break a string up for readability but for it to still be treated as a single-line string: interior line breaks will be stripped out, there will only be one line break at the end:
        key: >
          Your long
          string here.
  • Use | if you want those line breaks to be preserved as \n (for instance, embedded markdown with paragraphs).
        key: |
          ### Heading

          * Bullet
          * Points
  • Use >- or |- instead if you don't want a line break appended at the end.

  • Use "" if you need to split lines in the middle of words or want to literally type line breaks as \n:

        key: "Antidisestab\
         lishmentarianism.\n\nGet on it."
  • YAML is crazy.

Block scalar styles (>, |)

These allow characters such as \ and " without escaping, and add a new line (\n) to the end of your string.

> Folded style removes single newlines within the string (but adds one at the end, and converts double newlines to singles):

    Key: >
      this is my very very very
      long string

→ this is my very very very long string\n

Extra leading space is retained and causes extra newlines. See note below.

Advice: Use this. Usually this is what you want.

| Literal style turns every newline within the string into a literal newline, and adds one at the end:

    Key: |
      this is my very very very 
      long string

→ this is my very very very\nlong string\n

Here's the official definition from the YAML Spec 1.2.2

Scalar content can be written in block notation, using a literal style (indicated by "|") where all line breaks are significant. Alternatively, they can be written with the folded style (denoted by ">") where each line break is folded to a space unless it ends an empty or a more-indented line.

Advice: Use this for inserting formatted text (especially Markdown) as a value.

Block styles with block chomping indicator (>-, |-, >+, |+)

You can control the handling of the final new line in the string, and any trailing blank lines (\n\n) by adding a block chomping indicator character:

  • >, |: "clip": keep the line feed, remove the trailing blank lines.
  • >-, |-: "strip": remove the line feed, remove the trailing blank lines.
  • >+, |+: "keep": keep the line feed, keep trailing blank lines.
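As a small illustration of the chomping indicators (the parsed values in the comments assume a YAML 1.2 parser):

```yaml
clip: >
  two
  lines
strip: >-
  two
  lines
keep: >+
  two
  lines
# clip  parses to "two lines\n"
# strip parses to "two lines"
# keep  parses to "two lines\n" here (it would also keep any trailing blank lines)
```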

"Flow" scalar styles (plain, ", ')

These have limited escaping, and construct a single-line string with no new line characters. They can begin on the same line as the key, or with additional newlines first, which are stripped. Doubled newline characters become one newline.

plain style (no escaping, no # or : combinations, first character can't be ", ' or many other punctuation characters):

    Key: this is my very very very 
      long string

Advice: Avoid. May look convenient, but you're liable to shoot yourself in the foot by accidentally using forbidden punctuation and triggering a syntax error.

double-quoted style (\ and " must be escaped by \, newlines can be inserted with a literal \n sequence, lines can be concatenated without spaces with trailing \):

    Key: "this is my very very \"very\" loooo\
      ng string.\n\nLove, YAML."

→ "this is my very very \"very\" loooong string.\n\nLove, YAML."

Advice: Use in very specific situations. This is the only way you can break a very long token (like a URL) across lines without adding spaces. And maybe adding newlines mid-line is conceivably useful.

single-quoted style (literal ' must be doubled, no special characters, possibly useful for expressing strings starting with double quotes):

    Key: 'this is my very very "very"
      long string, isn''t it.'

→ "this is my very very \"very\" long string, isn't it."

Advice: Avoid. Very few benefits, mostly inconvenience.

Block styles with indentation indicators

Just in case the above isn't enough for you, you can add a "block indentation indicator" (after your block chomping indicator, if you have one):

    - >8
            My long string
            starts over here
    - |+1
     This one
     starts here

Note: Leading spaces in Folded style (>)

If you insert extra spaces at the start of not-the-first lines in Folded style, they will be kept, with a bonus newline. (This doesn't happen with flow styles.) Section 6.5 says:

In addition, folding does not apply to line breaks surrounding text lines that contain leading white space. Note that such a more-indented line may consist only of such leading white space.

    - >
        my long
          string

        many spaces above
    - my long
          string

        many spaces above

→ ["my long\n string\n \nmany spaces above\n","my long string\nmany spaces above"]

Summary

In this table: _ means space character, \n means "newline character" except where noted. "Leading space" refers to an additional space character on the second line, when the first is only spaces (which establishes the indent).

|                                    | `>`   | `\|`   | `>-`  | `\|-`  | `>+`   | `\|+`  | plain | `"`  | `'`  |
|------------------------------------|-------|--------|-------|--------|--------|--------|-------|------|------|
| **Spaces/newlines converted to:**  |       |        |       |        |        |        |       |      |      |
| Trailing space →                   | `_`   | `_`    | `_`   | `_`    | `_`    | `_`    |       |      |      |
| Leading space →                    | `\n_` | `\n_`  | `\n_` | `\n_`  | `\n_`  | `\n_`  |       |      |      |
| Single newline →                   | `_`   | `\n`   | `_`   | `\n`   | `_`    | `\n`   | `_`   | `_`  | `_`  |
| Double newline →                   | `\n`  | `\n\n` | `\n`  | `\n\n` | `\n`   | `\n\n` | `\n`  | `\n` | `\n` |
| Final newline →                    | `\n`  | `\n`   |       |        | `\n`   | `\n`   |       |      |      |
| Final double newline →             | `\n`  | `\n`   |       |        | `\n\n` | `\n\n` |       |      |      |
| **How to create a literal:**       |       |        |       |        |        |        |       |      |      |
| Single quote `'`                   | `'`   | `'`    | `'`   | `'`    | `'`    | `'`    | `'`   | `'`  | `''` |
| Double quote `"`                   | `"`   | `"`    | `"`   | `"`    | `"`    | `"`    | `"`   | `\"` | `"`  |
| Backslash `\`                      | `\`   | `\`    | `\`   | `\`    | `\`    | `\`    | `\`   | `\\` | `\`  |
| **Other features**                 |       |        |       |        |        |        |       |      |      |
| In-line newlines with literal `\n` | 🚫    | 🚫     | 🚫    | 🚫     | 🚫     | 🚫     | 🚫    | ✅   | 🚫   |
| Spaceless newlines with `\`        | 🚫    | 🚫     | 🚫    | 🚫     | 🚫     | 🚫     | 🚫    | ✅   | 🚫   |
| `#` or `:` in value                | ✅    | ✅     | ✅    | ✅     | ✅     | ✅     | 🚫    | ✅   | ✅   |
| Can start on same line as key      | 🚫    | 🚫     | 🚫    | 🚫     | 🚫     | 🚫     | ✅    | ✅   | ✅   |

Examples

Note the trailing spaces on the line before "spaces."

    - >
      very "long"
      'string' with

      paragraph gap, \n and        
      spaces.
    - | 
      very "long"
      'string' with

      paragraph gap, \n and        
      spaces.
    - very "long"
      'string' with

      paragraph gap, \n and        
      spaces.
    - "very \"long\"
      'string' with

      paragraph gap, \n and        
      s\
      p\
      a\
      c\
      e\
      s."
    - 'very "long"
      ''string'' with

      paragraph gap, \n and        
      spaces.'
    - >- 
      very "long"
      'string' with

      paragraph gap, \n and        
      spaces.

    [
      "very \"long\" 'string' with\nparagraph gap, \\n and         spaces.\n", 
      "very \"long\"\n'string' with\n\nparagraph gap, \\n and        \nspaces.\n", 
      "very \"long\" 'string' with\nparagraph gap, \\n and spaces.", 
      "very \"long\" 'string' with\nparagraph gap, \n and spaces.", 
      "very \"long\" 'string' with\nparagraph gap, \\n and spaces.", 
      "very \"long\" 'string' with\nparagraph gap, \\n and         spaces."
    ]

*2 block styles, each with 2 possible block chomping indicators (or none), and with 9 possible indentation indicators (or none), 1 plain style and 2 quoted styles: 2 x (2 + 1) x (9 + 1) + 1 + 2 = 63

Some of this information has also been summarised here.

Which doc should I read?

Writing documentation is good, but only if it is useful for the intended audience. As I continue to build out this wiki, I thought it would be helpful to provide additional context about the types of documents that are here.

Reference Docs vs. Guides

Reference docs explain what a tool is and why it's used, while guides provide step-by-step instructions for completing specific tasks once you're ready to take action.

Reference Docs

Reference Docs provide high-level information and context about an application or tool. They explain what the tool is, its purpose, and how it fits into the broader system or workflow. These docs are ideal for users who are trying to understand or learn about the tool at a conceptual level.

Guides

Guides focus on action-oriented, step-by-step instructions to accomplish specific tasks. They assume the reader already understands the basics of the tool from the overview docs and is ready to perform a particular operation or workflow.

Which doc should I read?

When to use each?

Start with the reference docs to gain foundational knowledge about an application or tool. Then, refer to a guide when you need detailed steps to complete a specific task.

Munchkin Short Rules

The "Munchkin Short Rules" page is a reference doc.

A visual way to decide which doc to read

If you are a visual learner, here is a visual way to decide which doc to read:

flowchart TD

    A(Do you need to learn<br/>about a tool?)
    O[Read the Reference Docs]
    A -->|Yes| O

    F(Ready to complete a task?)
    G[Follow a Guide]

    O --> F
    F -->|Yes| G
    A -->|No| F
    F -->|No| X[Explore other resources]

    style G stroke:#800080,stroke-width:3px
    style O stroke:#008080,stroke-width:3px

Understanding Agile Work Hierarchy: Stories, Epics, and Initiatives

In distributed teams, having a shared understanding of Agile terminology is essential to working effectively. This post explains the hierarchy of Agile work items and how they fit together to support clarity, alignment, and execution.

🧩 Work Item Hierarchy Overview

graph TD
    %% the preview is rendering the bottom
    %% subgraph first, so switching them here
    subgraph Initiative 2
        I2E1[Epic]
        I2E2[Epic]
        I2S1["Story: 'What?' & 'Why?'"]
        I2S2["Story: 'What?' & 'Why?'"]
        I2T1["Task: 'How?'"]
        I2T2["Task: 'How?'"]
        I2T3["Task: 'How?'"]
        I2T4["Task: 'How?'"]
        I2E1 --> I2S1
        I2E2 --> I2S2
        I2S1 --> I2T1
        I2S1 --> I2T2
        I2S2 --> I2T3
        I2S2 --> I2T4
    end

    subgraph Initiative 1
        I1E1[Epic]
        I1E2[Epic]
        I1S1["Story: 'What?' & 'Why?'"]
        I1S2["Story: 'What?' & 'Why?'"]
        I1T1["Task: 'How?'"]
        I1T2["Task: 'How?'"]
        I1T3["Task: 'How?'"]
        I1T4["Task: 'How?'"]
        I1E1 --> I1S1
        I1E2 --> I1S2
        I1S1 --> I1T1
        I1S1 --> I1T2
        I1S2 --> I1T3
        I1S2 --> I1T4
    end

    Initiative1[Initiative 1] --> I1E1
    Initiative1 --> I1E2
    Initiative2[Initiative 2] --> I2E1
    Initiative2 --> I2E2

πŸ“ Definitions

Level Description
🧱 Task The "how?" – implementation-level steps proving stories are fulfilled.
πŸ“— Story The "what?" and "why?" – user-centric requirements with criteria.
πŸ“˜ Epic A collection of related stories forming a larger feature.
πŸ—‚ Initiative Strategic objective spanning multiple epics.

Quick Analogy:

Stories = Requirements. Tasks = Implementation.

🌍 Why It Matters for Distributed Teams

  • ✍️ Shared Vocabulary: Avoids confusion across locations.
  • 🎯 Goal Alignment: Connects daily work to strategic initiatives.
  • πŸ” Traceability: Tasks trace back to story requirements.

For a deeper, evolving reference guide, see the Agile Work Hierarchy Reference page.

Agile Story vs Task

Note

This is meant to be a way to help understand and develop a process for tracking work in a distributed team.

Agile Story vs Task

In Agile frameworks like Scrumban, understanding the distinction between a task and a story is crucial for effective project management. A user story is typically functionality that will be visible to end users and captures requirements and acceptance criteria from the user's perspective. Developing a user story usually involves multiple roles such as a programmer, tester, user interface designer, or analyst, indicating that it contains multiple types of work. 03

On the other hand, a task is a unit of work that is generally worked on by one person and is restricted to a single type of work. Tasks are implementation activities designed to prove that the requirements and acceptance criteria of user stories have been met. They are often technical in nature, such as implementing a class, setting up a virtual machine, writing a script, or conducting UI testing. 02

In the context of Jira, a popular tool for Agile project management, a Story is a more specific version of a Task. Both are work requests, but the Story type was created to help people track User Stories in Jira. Tasks in Jira are considered "units" of work and are often used for activities that are not testable, whereas stories are for functionalities that can be tested and potentially shipped. 04

Scrumban, a hybrid of Scrum and Kanban, utilizes both concepts but emphasizes visualizing work, limiting work in progress, and maximizing efficiency. In Scrumban, tasks are represented as cards on a Kanban board, moving through different stages of the process, while stories are part of the product backlog and are prioritized based on complexity and product demand. 07

The key to distinguishing between a story and a task lies in understanding that stories bring functionality and value that is recognizable to the user, while tasks are the steps taken by developers to realize that functionality. 02 This distinction helps in organizing work in a way that aligns with both the user's needs and the team's capacity to deliver. 08

Work tracking in a distributed team

Note

This is meant to be a visual overview of how to manage issues as part of an overall work tracking process.

As mentioned in the Using Scrumban in a distributed team post, using Sprints to plan and define the work that will be completed can be extremely helpful in a distributed team. There should be a formally established process to follow in order to help everyone understand expectations. In this scenario, we can explore how to use GitHub issues to plan and track work, which can be helpful given the recent changes the GitHub team is making to issues and projects.

Agile Project Management Terminology

What are stories, epics, and initiatives? (from atlassian.com)

  • Stories, also called β€œuser stories,” are short requirements or requests written from the perspective of an end user.
  • Epics are large bodies of work that can be broken down into a number of smaller tasks (called stories).
  • Initiatives are collections of epics that drive toward a common goal.
flowchart TD
    %% the preview is rendering the bottom
    %% subgraph first, so switching them here
    subgraph Initiative 2
        I2E1[Epic C]
        I2E2[Epic D]
        I2S1[Story C1]
        I2S2[Story D1]
        I2E1 --> I2S1
        I2E2 --> I2S2
    end

    subgraph Initiative 1
        I1E1[Epic A]
        I1E2[Epic B]
        I1S1[Story A1]
        I1S2[Story B1]
        I1E1 --> I1S1
        I1E2 --> I1S2
    end

    Initiative1[Initiative 1] --> I1E1
    Initiative1 --> I1E2
    Initiative2[Initiative 2] --> I2E1
    Initiative2 --> I2E2

Including Tasks to use with GitHub Projects

We can build on the Agile project management terminology by adding tasks as a subset of either a story or an epic. The differences are explained more in the Agile Story vs Task post and the Understanding Agile Work Hierarchy post.

For a deeper, evolving reference guide, see the Agile Work Hierarchy Reference page.

Legends in mermaid diagrams

It might be helpful to add a legend to graphs using Mermaid.

Support for Legends in a Graph

Here are some examples to test workarounds for legends in a graph from mermaid-js/mermaid#2110

Legends in a Graph Example 1

This is from mermaid-js/mermaid#2110 comment 1057895108:

flowchart LR
    TF1<--->|hand off|TF2
    subgraph Translators
        direction LR
        TF2[Translation Files]<-->Pipeline
        subgraph Pipeline
            direction TB
            create-->review
            review-->approve
            approve-->maintain
            maintain-->review
        end
    end
    subgraph Developers
    TF1[Translation Files]-->|reference|Code
    Code-->|extract|TF1
    end
    subgraph Legend
      direction LR
      start1[ ] --->|fully automatable| stop1[ ]
      style start1 height:0px;
      style stop1 height:0px;
      start2[ ] --->|highly automatable| stop2[ ]
      style start2 height:0px;
      style stop2 height:0px; 
    end
    linkStyle 0 stroke:red;
    linkStyle 2 stroke:red;
    linkStyle 3 stroke:orange;
    linkStyle 4 stroke:orange;
    linkStyle 5 stroke:orange;
    linkStyle 6 stroke:red;
    linkStyle 7 stroke:red;
    linkStyle 8 stroke:red;
    linkStyle 9 stroke:orange;

Legends in a Graph Example 2

This is from mermaid-js/mermaid#2110 comment 2696764562:

The diagram was commented out due to an error...

Blog post template

This is a template file for blog posts in MkDocs. In order to keep it simple to create new posts, the template file should have the following:

Page configuration

When creating a new blog post, determine the following page configuration options:

Page metadata

When creating a new blog post, determine the following page metadata:

Putting it all together

Front-matter

The front-matter consists of YAML Style Meta-Data to define the page configuration settings:

---
# page configuration
title: Blog post template
description: >
  This is a template file for blog posts in MkDocs.
# icon: octicons/repo-template-24
# https://squidfunk.github.io/mkdocs-material/reference#setting-the-page-icon
status: new
# page metadata
draft: true
date:
  created: 2025-02-18
  updated: 2025-02-18
authors:
  - rwaight
categories:
  - MkDocs
slug: blog-post-template
tags:
  - MkDocs
  - Template
---

Example section

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor massa, nec semper lorem quam in massa.

Personal time tracking

Notes about self-tracking time spent on different projects each week using different tools.

Review Slack messages sent during the week

Within Slack, you can search for messages you have sent using the following:

  • from:@SlackUsername
  • before:2025-02-08
  • after:2025-02-02

The actual query would be:

from:@SlackUsername before:2025-02-08 after:2025-02-02

Review GitHub commits during the week

GitHub Docs - Searching commits

Get a list of commits by author or committer:

# using the 'author'
https://github.com/search?q=author%3Agithubusername&type=commits&s=committer-date&o=desc&p=1
# using the 'committer'
https://github.com/search?q=committer%3Agithubusername&type=commits&s=committer-date&o=desc
# using the 'committer-name'
https://github.com/search?q=committer-name%3Agithubusername&type=commits&s=committer-date&o=desc

Try to use the date ranges to get a list of commits, by authored or committed date:

https://github.com/search?q=author%3Agithubusername&type=commits&s=committer-date&created%3A%3E2025-02-07&o=desc

Other syntax to try:

author-date%3A<2025-02-02&type=Commits

author-date
author-date:>2016-01-01

https://github.com/search?q=created%3A%3E2025-02-07&type=Repositories&ref=advsearch&l=&l=
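These qualifiers can be combined in a small shell snippet that builds a commit-search URL for a given week (the username and dates are placeholders):

```shell
user="githubusername"        # placeholder username
start="2025-02-02"
end="2025-02-08"

# 'author-date:START..END' limits the commit search to a date range;
# '%3A' is the URL-encoded ':'
url="https://github.com/search?q=author%3A${user}+author-date%3A${start}..${end}&type=commits&s=author-date&o=desc"
echo "$url"
```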

Using your Google Calendar to keep track of time

If you use Google Calendar, you can also use Google Apps Script to print a list of calendar events for a specified date range.

  • Create a new project in Google Apps Script, you can name it Calendar info
  • In the project, create a new file named CalendarEntries
  • Add the following functions:
// Global variables
var START_DATE = new Date('2025-02-23');
var END_DATE = new Date('2025-03-01');  // End date is exclusive, be sure to pick the date AFTER you want to search through
var CALENDAR_ID = 'myemail@gmail.com';

function logAllEventStatuses() {
  var calendar = CalendarApp.getCalendarById(CALENDAR_ID);
  var events = calendar.getEvents(START_DATE, END_DATE);

  events.forEach(function(event) {
    Logger.log('Event: ' + event.getTitle() + ', My Status: ' + event.getMyStatus());
  });
}

function listCalendarEntries() {
  var calendar = CalendarApp.getCalendarById(CALENDAR_ID);
  var events = calendar.getEvents(START_DATE, END_DATE);

  Logger.log('Total events retrieved: ' + events.length);

  // Group events by date
  var eventsByDate = {};
  events.forEach(function(event) {
    var dateStr = Utilities.formatDate(event.getStartTime(), Session.getScriptTimeZone(), 'yyyy-MM-dd');
    if (!eventsByDate[dateStr]) {
      eventsByDate[dateStr] = [];
    }
    // Calculate duration in hours as a decimal
    var durationHours = (event.getEndTime() - event.getStartTime()) / (1000 * 60 * 60);
    eventsByDate[dateStr].push(`${event.getTitle()} (Duration: ${durationHours} hours)`);
  });

  // Create a text output grouped by date
  var output = '';
  Object.keys(eventsByDate).sort().forEach(function(date) {
    output += date + ':\n';
    eventsByDate[date].forEach(function(eventText) {
      output += ' - ' + eventText + '\n';
    });
    output += '\n';
  });

  Logger.log(output);
}

function logAcceptedEvents() {
  // 'getMyStatus()' returns a 'CalendarApp.GuestStatus' enum value rather
  // than a plain string, so coerce it to a string before comparing.
  //
  // For this function, you can decide if you want to use a different date range.
  // For now, we'll reuse the global dates.
  var calendar = CalendarApp.getCalendarById(CALENDAR_ID);
  var events = calendar.getEvents(START_DATE, END_DATE);

  Logger.log('Total events retrieved: ' + events.length);

  events.forEach(function(event) {
    var myStatus = String(event.getMyStatus());
    // Filter events that are either owned or accepted
    if (myStatus === 'OWNER' || myStatus === 'YES') {
      Logger.log('Accepted Event: ' + event.getTitle() +
                 ' | Start: ' + event.getStartTime() +
                 ' | End: ' + event.getEndTime() +
                 ' | Status: ' + myStatus);
    }
  });
}
  • Update the CALENDAR_ID with your email address
  • Update the START_DATE to the first day you want to get calendar entries for
  • Update the END_DATE to the date AFTER you want to search through
    • Example: if you want to search through 2025-03-01, then enter 2025-03-02
  • Save the changes to the CalendarEntries file
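Since END_DATE is exclusive, the date after the last day can be computed rather than typed by hand (a sketch using GNU date; on macOS, gdate from coreutils would be needed):

```shell
# compute the exclusive END_DATE from the last day you want included
last_day="2025-03-01"
end_date=$(date -d "$last_day + 1 day" +%Y-%m-%d)
echo "$end_date"
# → 2025-03-02
```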

Running the listCalendarEntries function

Once you have stored your email address in the CALENDAR_ID and updated the START_DATE and END_DATE variables, now you can run the listCalendarEntries function:

  • Confirm the variables have been set correctly
  • Select the listCalendarEntries in the drop down menu
  • Select the Run option in the menu

(Screenshot: selecting the function to run in Google Apps Script)