Conversation
@Shuang-cnt (Collaborator) commented Jan 28, 2026

Description

Expand sharding_dump.py to output logical axes and update the unit test sharding_compare_test.py.
Currently, two JSON files (logical_shardings.json and named_shardings.json) are generated per model/device_type/slice_number.

To avoid negatively impacting GitHub Actions CI run times, we check in only a limited set of golden files (Deepseek2-16b/gpt-oss-20b/qwen-0.6b with tpu7x-16/v5p-16/v6e-16) to the MaxText repository.

Commands:

Get baseline sharding info:

There are two primary ways to use the script run_sharding_dump.py:

  1. Generate Sharding for All Predefined Test Cases

Run the script without any command-line arguments to iterate through all test
cases defined in tests.utils.sharding_dump.TEST_CASES. It will skip any
combination for which the output files already exist (see the sketch after the single-case example below).

Command:
python3 -m tests.utils.run_sharding_dump

  2. Generate Sharding for a Single, Specific Case

Provide the model_name, topology, and num_slice as command-line arguments
to generate sharding information for a single configuration. You must provide
all three arguments.

Command:
python3 -m tests.utils.run_sharding_dump --model_name <model> --topology <topology> --num_slice <slices>

Example:
python3 -m tests.utils.run_sharding_dump --model_name gemma-7b --topology v5p-256 --num_slice 1
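
For orientation, here is a minimal sketch of what the all-cases mode does, assuming TEST_CASES is an iterable of (model_name, topology, num_slice) combinations; the golden_dir helper and directory layout are hypothetical, and the real logic lives in tests/utils/sharding_dump.py and tests/utils/run_sharding_dump.py:

import os

from tests.utils.sharding_dump import TEST_CASES  # assumed: iterable of (model_name, topology, num_slice)

def golden_dir(model_name, topology, num_slice):
  # Hypothetical output layout: one directory per model/topology/slice combination.
  return os.path.join("golden_shardings", model_name, topology, f"slice_{num_slice}")

for model_name, topology, num_slice in TEST_CASES:
  out_dir = golden_dir(model_name, topology, num_slice)
  have_logical = os.path.exists(os.path.join(out_dir, "logical_shardings.json"))
  have_named = os.path.exists(os.path.join(out_dir, "named_shardings.json"))
  if have_logical and have_named:
    continue  # golden files already exist; skip this combination
  # ... build the config for this combination and dump logical/named shardings ...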

Compare sharding info:

python3 -m pytest tests/unit/sharding_compare_test.py -s -v -k "llama3.1-70b" 2>&1 | tee test_output.log
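
Conceptually, the comparison test loads a freshly generated dump and diffs it against the checked-in golden JSON, keyed by parameter path. Below is a minimal sketch of that comparison, based only on the JSON structure shown in the example that follows; the actual test in tests/unit/sharding_compare_test.py may be organized differently:

import json

def compare_shardings(golden_path, actual_path):
  """Diffs two sharding-dump JSON files keyed by parameter path."""
  with open(golden_path) as f:
    golden = json.load(f)
  with open(actual_path) as f:
    actual = json.load(f)
  mismatches = []
  for param_path, expected in golden.items():
    got = actual.get(param_path)
    if got is None:
      mismatches.append(f"Missing parameter: {param_path}")
      continue
    if got["partition_spec"] != expected["partition_spec"]:
      mismatches.append(f"PartitionSpec mismatch at '{param_path}': "
                        f"expected {expected['partition_spec']}, got {got['partition_spec']}")
    if got["shape"] != expected["shape"]:
      mismatches.append(f"Shape mismatch at '{param_path}': "
                        f"expected {expected['shape']}, got {got['shape']}")
  return mismatches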

Example

Content in logical_shardings.json

".params/['params']/['decoder']/['layers']/['mlp']/['wi_0']/['kernel']": {
    "partition_spec": [
      "embed",
      "layers",
      "mlp"
    ],
    "shape": [
      8192,
      80,
      28672
    ]
  },
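
In this entry, the logical partition_spec lists one logical axis name per tensor dimension. A tiny sketch for inspecting such a dump (the file path is illustrative):

import json

with open("logical_shardings.json") as f:  # path is illustrative
  logical = json.load(f)

for param_path, entry in logical.items():
  # e.g. [('embed', 8192), ('layers', 80), ('mlp', 28672)] for the wi_0 kernel above
  print(param_path, list(zip(entry["partition_spec"], entry["shape"])))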

Content in named_shardings.json

".params/['params']/['decoder']/['layers']/['mlp']/['wi_0']/['kernel']": {
    "mesh": {
      "axis_names": [
        "data",
        "stage",
        "fsdp",
        "fsdp_transpose",
        "sequence",
        "context",
        "context_autoregressive",
        "tensor",
        "tensor_transpose",
        "tensor_sequence",
        "expert",
        "autoregressive"
      ],
      "shape": {
        "data": 1,
        "stage": 1,
        "fsdp": 16,
        "fsdp_transpose": 1,
        "sequence": 1,
        "context": 1,
        "context_autoregressive": 1,
        "tensor": 1,
        "tensor_transpose": 1,
        "tensor_sequence": 1,
        "expert": 1,
        "autoregressive": 1
      }
    },
    "partition_spec": [
      [
        "fsdp",
        "sequence",
        "tensor_transpose",
        "context",
        "expert"
      ],
      "stage",
      [
        "fsdp_transpose",
        "tensor",
        "tensor_sequence",
        "autoregressive"
      ]
    ],
    "shape": [
      8192,
      80,
      28672
    ]
  },
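
The named entry is essentially a serialized jax.sharding.NamedSharding: the mesh axis names and sizes plus the resolved per-dimension PartitionSpec. A hedged sketch of how such an entry could be built; the actual serialization in sharding_dump.py may differ:

from jax.sharding import NamedSharding

def named_sharding_to_dict(sharding: NamedSharding, shape):
  """Serializes a NamedSharding plus array shape into the structure shown above."""
  mesh = sharding.mesh
  return {
      "mesh": {
          "axis_names": list(mesh.axis_names),
          "shape": {name: int(size) for name, size in mesh.shape.items()},
      },
      # Each PartitionSpec entry is a single mesh axis, a tuple of mesh axes
      # (serialized as a list), or None for an unsharded dimension.
      "partition_spec": [list(p) if isinstance(p, tuple) else p for p in sharding.spec],
      "shape": list(shape),
  }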

Tests

UT for sharding dump comparison failed (physical weight): https://paste.googleplex.com/6032726044049408

[FAIL] Physical Sharding Mismatch: llama3.1-70b v5e-16 slice 1

PartitionSpec mismatch at '.params/['params']/['decoder']/['decoder_norm']/['scale']':
  Expected (Physical): [['fsdp', 'tensor_transpose']]
  Actual (Physical): [['tensor', 'tensor_transpose']]

UT for sharding dump comparison failed (logical): https://paste.googleplex.com/5855428334452736

[FAIL] Logical Sharding Mismatch: llama3.1-70b v5e-16 slice 1

Shape mismatch at '.params/['params']/['decoder']/['decoder_norm']/['scale']':
  Expected (Logical): [80]
  Actual (Logical): [8192]

UT for sharding dump comparison succeeded: https://paste.googleplex.com/6737857618247680

Checklist

Before submitting this PR, please make sure (put X in square brackets):

  • I have performed a self-review of my code. For an optional AI review, add the gemini-review label.
  • I have necessary comments in my code, particularly in hard-to-understand areas.
  • I have run end-to-end tests and provided workload links above if applicable.
  • I have made or will make corresponding changes to the doc if needed, including adding new documentation pages to the relevant Table of Contents (toctree directive) as explained in our documentation.

@Shuang-cnt Shuang-cnt force-pushed the user/sharony/exp_sharding_dump branch from 964aca4 to bd07254 on January 29, 2026 00:42
codecov bot commented Jan 29, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.


@richjames0 (Collaborator) left a comment


Thanks!

@@ -23,7 +23,7 @@
from MaxText.train_compile import get_shaped_inputs, get_topology_mesh, validate_config

It looks like this test only verifies get_shaped_inputs and get_topology_mesh from train_compile. Please state explicitly that this test is for train_compile only, if that is the intention. Otherwise, I think it would be better to test get_abstract_state from maxtext_utils.

@Shuang-cnt Shuang-cnt force-pushed the user/sharony/exp_sharding_dump branch 2 times, most recently from bf0879b to 7ba2a20 on February 3, 2026 15:40
@Shuang-cnt Shuang-cnt force-pushed the user/sharony/exp_sharding_dump branch from 7ba2a20 to 277368f on February 3, 2026 15:47
@Shuang-cnt Shuang-cnt force-pushed the user/sharony/exp_sharding_dump branch from 277368f to 1321643 on February 4, 2026 15:10