MayrosSkills Hub

test-generator

Apilium · v1.0.3
official · platinum (7/8) · Clean scan

Install

mayros skill install test-generator
mayros skill install test-generator@1.0.3

Generate unit/integration/e2e tests with framework detection

README


name: test-generator
description: Generate unit/integration/e2e tests with framework detection
type: semantic
user-invocable: true
semantic:
  skillVersion: 1
  permissions:
    graph: [read, write]
    memory: [recall, remember]
  assertions:
    - predicate: "test:generated"
      requireProof: false
    - predicate: "test:coverage_gap"
      requireProof: false
    - predicate: "test:framework_detected"
      requireProof: false
  queries:
    - predicate: "test:generated"
      scope: agent
    - predicate: "test:coverage_context"
      scope: namespace

test-generator

Automated test generation with framework detection, coverage gap analysis, and multi-level test strategy. Generates complete, runnable test files with proper imports, mocks, and descriptive names.

Framework Detection

The skill automatically detects the testing framework in use by examining project files and code patterns:

Vitest

  • Config files: vitest.config.ts, vitest.config.js, vite.config.ts with test section
  • Code indicators: vi.fn(), vi.mock(), vi.spyOn(), vi.stubGlobal()
  • Imports: import { describe, it, expect, vi } from "vitest"
  • Features: native ESM, TypeScript without transpilation, in-source testing, concurrent tests

Jest

  • Config files: jest.config.ts, jest.config.js, package.json with jest key
  • Code indicators: jest.fn(), jest.mock(), jest.spyOn()
  • Imports: import { describe, it, expect, jest } from "@jest/globals" (ESM) or globals
  • Features: snapshot testing, fake timers, module mocking, coverage built-in

Mocha

  • Config files: .mocharc.yml, .mocharc.json, mocha in package.json scripts
  • Code indicators: describe(), it(), before(), after(), chai.expect
  • Imports: import { expect } from "chai", import { describe, it } from "mocha"
  • Features: flexible assertion libraries, BDD/TDD interfaces, browser support

Pytest

  • Config files: pytest.ini, pyproject.toml with [tool.pytest], conftest.py
  • Code indicators: @pytest.fixture, @pytest.mark.parametrize, monkeypatch
  • Imports: import pytest, from conftest import ...
  • Features: fixtures, parametrize, markers, plugins, conftest inheritance

Go test

  • Config files: go.mod, *_test.go files
  • Code indicators: testing.T, t.Run(), t.Parallel(), t.Helper()
  • Patterns: table-driven tests, TestXxx(t *testing.T), subtests
  • Features: built-in benchmarking, race detector, coverage profiling
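The detection rules above can be sketched as a simple file-name scan. This is an illustrative sketch only, not the skill's actual implementation: the marker patterns mirror the config files listed above, and the priority order (Vitest checked before Jest, and so on) is an assumption.

```typescript
// Sketch: map a project's file names to a likely test framework.
// Patterns follow the "Config files" bullets above; a real detector
// would also inspect package.json contents and code indicators.
type Framework = "vitest" | "jest" | "mocha" | "pytest" | "go" | "unknown";

const markers: Array<[Framework, RegExp]> = [
  ["vitest", /^vitest\.config\.(ts|js)$/],
  ["jest",   /^jest\.config\.(ts|js)$/],
  ["mocha",  /^\.mocharc\.(yml|json)$/],
  ["pytest", /^(pytest\.ini|conftest\.py)$/],
  ["go",     /(^go\.mod$|_test\.go$)/],
];

function detectFramework(fileNames: string[]): Framework {
  // First marker that matches any file wins.
  for (const [framework, pattern] of markers) {
    if (fileNames.some((name) => pattern.test(name))) return framework;
  }
  return "unknown";
}
```

For example, a project containing `vitest.config.ts` resolves to `vitest`, while one with only `go.mod` and `*_test.go` files resolves to `go`.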

Test Strategy

Tests are generated at three levels, in order of priority:

Unit Tests (Highest Priority)

  • Test individual functions and methods in isolation
  • Mock all external dependencies
  • Focus on: input/output correctness, error paths, edge cases
  • Target: every exported function, every branch, every error path

Integration Tests

  • Test module interactions and data flow
  • Use real implementations where feasible, mock external services
  • Focus on: API contracts, data transformation pipelines, middleware chains
  • Target: module boundaries, database interactions, service communication

End-to-End Tests

  • Test complete user flows
  • Minimize mocking, use test infrastructure
  • Focus on: critical user paths, authentication flows, data lifecycle
  • Target: signup-to-completion flows, error recovery scenarios

Test Writing Guidelines

AAA Pattern

Every test follows the Arrange-Act-Assert pattern:

it("should return empty array when no items match filter", () => {
  // Arrange
  const items = [{ name: "a", active: false }, { name: "b", active: false }];

  // Act
  const result = filterActive(items);

  // Assert
  expect(result).toEqual([]);
});

Naming Convention

Test names follow the pattern: "should [expected behavior] when [condition]"

Examples:

  • "should return 404 when user does not exist"
  • "should retry three times when connection fails"
  • "should hash password with bcrypt when creating user"

Edge Cases

Always generate tests for these edge cases:

  • null/undefined: Pass null, undefined, missing properties
  • Empty values: Empty strings "", empty arrays [], empty objects {}
  • Boundary values: 0, -1, Number.MAX_SAFE_INTEGER, empty buffers
  • Error paths: Network failures, invalid input, permission denied, timeout
  • Concurrent access: Race conditions, parallel mutations, lock contention
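As a concrete illustration of the first three bullets, here is a hypothetical `normalizeTags` helper (not from the skill itself) together with the edge-case checks a generated test file would cover, written as plain assertions so the example is self-contained:

```typescript
// Hypothetical helper under test: trims, lowercases, and de-duplicates
// tags while tolerating null/undefined input and null elements.
function normalizeTags(
  tags: (string | null | undefined)[] | null | undefined
): string[] {
  if (!tags) return []; // null/undefined input
  const seen = new Set<string>();
  for (const tag of tags) {
    const value = (tag ?? "").trim().toLowerCase();
    if (value) seen.add(value); // skips empty/whitespace-only strings
  }
  return [...seen];
}

// Edge cases mirroring the list above:
console.assert(normalizeTags(null).length === 0);        // null input
console.assert(normalizeTags([]).length === 0);          // empty array
console.assert(normalizeTags(["", "  "]).length === 0);  // empty strings
console.assert(normalizeTags(["A", "a", null]).length === 1); // dupes + null element
```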

Language-Specific Patterns

Vitest / TypeScript

import { describe, it, expect, vi, beforeEach } from "vitest";

vi.mock("./database.js", () => ({
  query: vi.fn(),
}));

describe("UserService", () => {
  beforeEach(() => { vi.clearAllMocks(); });
  it("should create user with hashed password", async () => { /* ... */ });
});

Jest / TypeScript

jest.mock("./database");
import { query } from "./database";

const mockQuery = query as jest.Mock;

describe("UserService", () => {
  beforeEach(() => { jest.clearAllMocks(); });
  it("should create user with hashed password", async () => { /* ... */ });
});

Pytest / Python

import pytest

@pytest.fixture
def user_service(db_session):
    return UserService(db_session)

@pytest.mark.parametrize("email,expected", [
    ("user@example.com", True),
    ("invalid", False),
    ("", False),
])
def test_validate_email(user_service, email, expected):
    assert user_service.validate_email(email) == expected

Go / Table-Driven Tests

func TestValidateEmail(t *testing.T) {
    tests := []struct {
        name  string
        input string
        want  bool
    }{
        {"valid email", "user@example.com", true},
        {"missing at", "userexample.com", false},
        {"empty", "", false},
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            got := ValidateEmail(tt.input)
            if got != tt.want {
                t.Errorf("ValidateEmail(%q) = %v, want %v", tt.input, got, tt.want)
            }
        })
    }
}

Coverage Gap Analysis

The skill identifies functions and code paths lacking test coverage:

  • High priority: Public API functions, security-sensitive code, error handlers
  • Medium priority: Internal helpers with complex logic, configuration parsers
  • Low priority: Simple getters/setters, type definitions, constants
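The prioritization above can be expressed as a small decision function. This is a minimal sketch under assumed inputs: the `FunctionInfo` flags and the complexity threshold are illustrative, not part of the skill's real data model.

```typescript
// Sketch: assign a coverage-gap priority to an untested function
// using the criteria listed above. Field names are hypothetical.
type Priority = "high" | "medium" | "low";

interface FunctionInfo {
  exported: boolean;           // part of the public API
  securitySensitive: boolean;  // auth, crypto, input validation, etc.
  cyclomaticComplexity: number;
  isTrivialAccessor: boolean;  // simple getter/setter
}

function coveragePriority(fn: FunctionInfo): Priority {
  if (fn.exported || fn.securitySensitive) return "high";
  if (fn.cyclomaticComplexity > 3 && !fn.isTrivialAccessor) return "medium";
  return "low";
}
```

A simple getter would land in `low` even if unexported, while any public API function is flagged `high` regardless of complexity.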

Semantic Integration

  • Use skill_assert with test:generated when tests are produced.
  • Use skill_assert with test:coverage_gap when untested code paths are identified.
  • Use skill_assert with test:framework_detected after framework detection completes.
  • Consult skill_memory_context for coverage context and previously generated tests.

Versions

v1.0.3 · Feb 27, 2026
v1.0.2 · Feb 27, 2026
v1.0.1 · Feb 27, 2026
v1.0.0 · Feb 26, 2026
