
[mlir][docs] Clarified Dialect creation tutorial + fixed typos #77820

Open · wants to merge 3 commits into main

30 changes: 15 additions & 15 deletions mlir/docs/DefiningDialects/_index.md
@@ -43,7 +43,7 @@ extends to all of the MLIR constructs, including [Interfaces](../Interfaces.md)

```tablegen
// Include the definition of the necessary tablegen constructs for defining
// our dialect.
// our dialect.
include "mlir/IR/DialectBase.td"

// Here is a simple definition of a dialect.
@@ -84,7 +84,7 @@ void MyDialect::initialize() {

The `summary` and `description` fields allow for providing user documentation
for the dialect. The `summary` field expects a simple single-line string, with the
`description` field used for long and extensive documentation. This documentation can be
`description` field used for long and extensive documentation. This documentation can be
used to generate markdown documentation for the dialect and is used by upstream
[MLIR dialects](https://mlir.llvm.org/docs/Dialects/).

@@ -133,7 +133,7 @@ void MyOp::setOtherValue(StringAttr newValue);

### Dependent Dialects

MLIR has a very large ecosystem, and contains dialects that server many different purposes. It
MLIR has a very large ecosystem, and contains dialects that serve many different purposes. It
is quite common, given the above, that dialects may want to reuse certain components from other
dialects. This may mean generating operations from those dialects during canonicalization, reusing
attributes or types, etc. When a dialect has a dependency on another, i.e. when it constructs and/or
@@ -230,7 +230,7 @@ is verified. The hook necessary for the dialect to implement has the form:
/// Verify the use of the given attribute, whose name is prefixed by the namespace of this
/// dialect, that was used on the attribute dictionary of a region entry block argument.
/// Note: As described above, when a region entry block has a dictionary is up to the individual
/// operation to define.
/// operation to define.
LogicalResult MyDialect::verifyRegionArgAttribute(Operation *op, unsigned regionIndex,
unsigned argIndex, NamedAttribute attribute);
```
@@ -250,16 +250,16 @@ has the form:
/// Generate verification for the given attribute, whose name is prefixed by the namespace
/// of this dialect, that was used on the attribute dictionary of a region result.
/// Note: As described above, when a region entry block has a dictionary is up to the individual
/// operation to define.
/// operation to define.
LogicalResult MyDialect::verifyRegionResultAttribute(Operation *op, unsigned regionIndex,
unsigned argIndex, NamedAttribute attribute);
```

### Operation Interface Fallback

Some dialects have an open ecosystem and don't register all of the possible operations. In such
cases it is still possible to provide support for implementing an `OpInterface` for these
operations. When an operation isn't registered or does not provide an implementation for an
cases it is still possible to provide support for implementing an `OpInterface` for these
operations. When an operation isn't registered or does not provide an implementation for an
interface, the query will fallback to the dialect itself. The `hasOperationInterfaceFallback`
field may be used to declare this fallback for operations:

@@ -269,10 +269,10 @@ field may be used to declare this fallback for operations:
void *MyDialect::getRegisteredInterfaceForOp(TypeID typeID, StringAttr opName);
```

For a more detail description of the expected usages of this hook, view the detailed
For a more detail description of the expected usages of this hook, view the detailed
[interface documentation](../Interfaces.md#dialect-fallback-for-opinterface).

### Default Attribute/Type Parsers and Printers
### Default Attribute/Type Parsers and Printers

When a dialect registers an Attribute or Type, it must also override the respective
`Dialect::parseAttribute`/`Dialect::printAttribute` or
@@ -286,7 +286,7 @@ parser and printer of its Attributes and Types it should set these to `0` as nec

### Dialect-wide Canonicalization Patterns

Generally, [canonicalization](../Canonicalization.md) patterns are specific to individual
Generally, [canonicalization](../Canonicalization.md) patterns are specific to individual
operations within a dialect. There are some cases, however, that prompt canonicalization
patterns to be added to the dialect-level. For example, if a dialect defines a canonicalization
pattern that operates on an interface or trait, it can be beneficial to only add this pattern
@@ -514,7 +514,7 @@ AbstractOperation::VerifyInvariantsFn verifyFn = [](Operation* op) {
AbstractOperation::ParseAssemblyFn parseFn =
[](OpAsmParser &parser, OperationState &state) {
// Parse the operation, given that the name is already parsed.
...
...
};

// Printer function
@@ -526,14 +526,14 @@ auto printFn = [](Operation *op, OpAsmPrinter &printer) {

// General folder implementation, see AbstractOperation::foldHook for more
// information.
auto foldHookFn = [](Operation * op, ArrayRef<Attribute> operands,
auto foldHookFn = [](Operation * op, ArrayRef<Attribute> operands,
SmallVectorImpl<OpFoldResult> &result) {
...
};

// Returns any canonicalization pattern rewrites that the operation
// supports, for use by the canonicalization pass.
auto getCanonicalizationPatterns =
auto getCanonicalizationPatterns =
[](RewritePatternSet &results, MLIRContext *context) {
...
}
@@ -635,7 +635,7 @@ though overriden `parseType` methods need to add the necessary support for them.
```c++
Type MyDialect::parseType(DialectAsmParser &parser) const {
...

// The type name.
StringRef typeTag;
if (failed(parser.parseKeyword(&typeTag)))
@@ -649,7 +649,7 @@ Type MyDialect::parseType(DialectAsmParser &parser) const {
return dynType;
return Type();
}

...
}
```
14 changes: 10 additions & 4 deletions mlir/docs/Tutorials/CreatingADialect.md
@@ -10,10 +10,10 @@ Public dialects are typically separated into at least 3 directories:
* mlir/test/Dialect/Foo (for tests)

Along with other public headers, the 'include' directory contains a
TableGen file in the [ODS format](../DefiningDialects/Operations.md), describing the
operations in the dialect. This is used to generate operation
declarations (FooOps.h.inc) and definitions (FooOps.cpp.inc) and
operation interface declarations (FooOpsInterfaces.h.inc) and
TableGen file in the [ODS format](../DefiningDialects/Operations.md),
describing the operations in the dialect. This is used to generate
operation declarations (FooOps.h.inc) and definitions (FooOps.cpp.inc)
and operation interface declarations (FooOpsInterfaces.h.inc) and
definitions (FooOpsInterfaces.cpp.inc).
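
As a side note (not shown in this diff), the generated `.inc` files are normally pulled into hand-written headers and sources behind the ODS guard macros; a minimal sketch, assuming an illustrative `Foo/FooOps.*` layout:

```c++
// FooOps.h -- declarations for the dialect's operations.
#include "mlir/IR/OpDefinition.h"

#define GET_OP_CLASSES
#include "Foo/FooOps.h.inc" // ODS-generated op declarations

// FooOps.cpp -- definitions for the dialect's operations.
#define GET_OP_CLASSES
#include "Foo/FooOps.cpp.inc" // ODS-generated op definitions
```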

The 'IR' directory typically contains implementations of functions for
@@ -106,6 +106,12 @@ the LINK_COMPONENTS descriptor. This allows cmake infrastructure to
generate new library targets with correct linkage, in particular, when
BUILD_SHARED_LIBS=on or LLVM_LINK_LLVM_DYLIB=on are specified.

Registration of the dialect can be performed globally by editing the
file include/mlir/InitAllDialects.h. In this example, we can add
Comment on lines +109 to +110

@joker-eph (Collaborator) · Feb 1, 2024

This is framed as if it is a possibility: I believe this isn't optional, all dialects should be there.

I would think that you can't write an mlir-opt test without this anyway.

@VitalyAnkh (Contributor) · Aug 5, 2024

@joker-eph That's not true. For example, vcix dialect isn't registered in include/mlir/InitAllDialects.h, but you can still use the --test-math-to-vcix pass with mlir-opt. Dialect registration can be skipped if mlir-opt only generates the dialect, rather than parsing it.

Collaborator:

> vcix dialect isn't registered in include/mlir/InitAllDialects.h

This would be an oversight.

Actually, it may even be worse than that: dialects are in general testing the syntax of their ops, and that requires dialect registration.

> Dialect registration can be skipped if mlir-opt only generates the dialect, rather than parsing it.

Sure: there is no use-case for this upstream though, I consider it a bug that a dialect wouldn't be part of InitAllDialects.h (we don't want this to be a misnomer).

`foo::FooDialect` to the `registry.insert` operation. This will
make the dialect available to all MLIR programs which initiate their
registries with `registerAllDialects` (e.g. mlir-opt). Alternatively,
we can register the dialect locally in our required program.
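
A minimal sketch (not part of this diff) of the two registration routes described above, assuming a hypothetical `foo::FooDialect`; the registry and context calls are standard MLIR APIs, while the dialect name and header path are illustrative:

```c++
#include "mlir/IR/DialectRegistry.h"
#include "mlir/IR/MLIRContext.h"
#include "mlir/InitAllDialects.h"

#include "Foo/FooDialect.h" // hypothetical header for the example dialect

int main() {
  // Global route: a registry seeded with everything in InitAllDialects.h,
  // plus our own dialect.
  mlir::DialectRegistry registry;
  mlir::registerAllDialects(registry);
  registry.insert<foo::FooDialect>();

  // A context built from this registry can parse IR from any registered dialect.
  mlir::MLIRContext context(registry);

  // Local route: load only the one dialect into an existing context.
  context.loadDialect<foo::FooDialect>();
  return 0;
}
```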

# Dialect Conversions

4 changes: 2 additions & 2 deletions mlir/include/mlir/Analysis/DataFlow/DenseAnalysis.h
@@ -293,7 +293,7 @@ class DenseForwardDataFlowAnalysis

/// Base class for dense backward dataflow analyses. Such analyses attach a
/// lattice between the execution of operations and implement a transfer
/// function from the lattice after the operation ot the lattice before it, thus
/// function from the lattice after the operation on the lattice before it, thus
/// propagating backward.
///
/// In this implementation, a lattice attached to an operation represents the
@@ -426,7 +426,7 @@ class AbstractDenseBackwardDataFlowAnalysis : public DataFlowAnalysis {

/// A dense backward dataflow analysis propagating lattices after and before the
/// execution of every operation across the IR by implementing transfer
/// functions for opreations.
/// functions for operations.
///
/// `LatticeT` is expected to be a subclass of `AbstractDenseLattice`.
template <typename LatticeT>
2 changes: 1 addition & 1 deletion mlir/include/mlir/Analysis/DataFlow/IntegerRangeAnalysis.h
@@ -92,7 +92,7 @@ class IntegerRangeAnalysis
public:
using SparseForwardDataFlowAnalysis::SparseForwardDataFlowAnalysis;

/// At an entry point, we cannot reason about interger value ranges.
/// At an entry point, we cannot reason about integer value ranges.
void setToEntryState(IntegerValueRangeLattice *lattice) override {
propagateIfChanged(lattice, lattice->join(IntegerValueRange::getMaxRange(
lattice->getPoint())));
4 changes: 2 additions & 2 deletions mlir/include/mlir/Analysis/DataFlowFramework.h
@@ -305,7 +305,7 @@ class DataFlowSolver {
/// these requirements.
///
/// 1. Querying the state of a program point prior to visiting that point
/// results in uninitialized state. Analyses must be aware of unintialized
/// results in uninitialized state. Analyses must be aware of uninitialized
/// states.
/// 2. Analysis states can reach fixpoints, where subsequent updates will never
/// trigger a change in the state.
@@ -462,7 +462,7 @@ class DataFlowAnalysis {
const DataFlowConfig &getSolverConfig() const { return solver.getConfig(); }

#if LLVM_ENABLE_ABI_BREAKING_CHECKS
/// When compiling with debugging, keep a name for the analyis.
/// When compiling with debugging, keep a name for the analysis.
StringRef debugName;
#endif // LLVM_ENABLE_ABI_BREAKING_CHECKS

2 changes: 1 addition & 1 deletion mlir/include/mlir/Analysis/Presburger/MPInt.h
@@ -29,7 +29,7 @@ namespace presburger {
/// identically-named functions that operate on MPInts, which would otherwie
/// become the only candidates of overload resolution when calling e.g. ceilDiv
/// from the mlir::presburger namespace. So to access the 64-bit overloads, an
/// explict call to mlir::ceilDiv would be required. These using declarations
/// explicit call to mlir::ceilDiv would be required. These using declarations
/// allow overload resolution to transparently call the right function.
using ::mlir::ceilDiv;
using ::mlir::floorDiv;
2 changes: 1 addition & 1 deletion mlir/include/mlir/Analysis/Presburger/Matrix.h
@@ -46,7 +46,7 @@ class Matrix {

/// Construct a matrix with the specified number of rows and columns.
/// The number of reserved rows and columns will be at least the number
/// specified, and will always be sufficient to accomodate the number of rows
/// specified, and will always be sufficient to accommodate the number of rows
/// and columns specified.
///
/// Initially, the entries are initialized to ero.
4 changes: 2 additions & 2 deletions mlir/include/mlir/Analysis/Presburger/PWMAFunction.h
@@ -117,7 +117,7 @@ class MultiAffineFunction {
/// The space of this function. The domain variables are considered as the
/// input variables of the function. The range variables are considered as
/// the outputs. The symbols parametrize the function and locals are used to
/// represent divisions. Each local variable has a corressponding division
/// represent divisions. Each local variable has a corresponding division
/// representation stored in `divs`.
PresburgerSpace space;

@@ -239,7 +239,7 @@ class PWMAFunction {

/// The space of this function. The domain variables are considered as the
/// input variables of the function. The range variables are considered as
/// the outputs. The symbols paramterize the function.
/// the outputs. The symbols parameterize the function.
PresburgerSpace space;

// The pieces of the PWMAFunction.
@@ -250,7 +250,7 @@ class PresburgerSet : public PresburgerRelation {
explicit PresburgerSet(const PresburgerRelation &set);

/// These operations are the same as the ones in PresburgeRelation, they just
/// forward the arguement and return the result as a set instead of a
/// forward the argument and return the result as a set instead of a
/// relation.
PresburgerSet unionSet(const PresburgerRelation &set) const;
PresburgerSet intersect(const PresburgerRelation &set) const;
8 changes: 4 additions & 4 deletions mlir/include/mlir/Analysis/Presburger/Simplex.h
@@ -348,7 +348,7 @@ class SimplexBase {
SmallVector<UndoLogEntry, 8> undoLog;

/// Holds a vector of bases. The ith saved basis is the basis that should be
/// restored when processing the ith occurrance of UndoLogEntry::RestoreBasis
/// restored when processing the ith occurrence of UndoLogEntry::RestoreBasis
/// in undoLog. This is used by getSnapshotBasis.
SmallVector<SmallVector<int, 8>, 8> savedBases;

@@ -371,7 +371,7 @@ class SimplexBase {
///
/// This does not directly support negative-valued variables, so it uses the big
/// M parameter trick to make all the variables non-negative. Basically we
/// introduce an artifical variable M that is considered to have a value of
/// introduce an artificial variable M that is considered to have a value of
/// +infinity and instead of the variables x, y, z, we internally use variables
/// M + x, M + y, M + z, which are now guaranteed to be non-negative. See the
/// documentation for SimplexBase for more details. M is also considered to be
@@ -565,7 +565,7 @@ struct SymbolicLexOpt {
/// negative for all values in the symbol domain, the row needs to be pivoted
/// irrespective of the precise value of the symbols. To answer queries like
/// "Is this symbolic sample always negative in the symbol domain?", we maintain
/// a `LexSimplex domainSimplex` correponding to the symbol domain.
/// a `LexSimplex domainSimplex` corresponding to the symbol domain.
///
/// In other cases, it may be that the symbolic sample is violated at some
/// values in the symbol domain and not violated at others. In this case,
@@ -756,7 +756,7 @@ class Simplex : public SimplexBase {
/// the ones marked redundant because we scan from left to right. Thus, when
/// there is some preference among the constraints as to which should be
/// marked redundant with priority when there are multiple possibilities, this
/// could be accomplished by succesive calls to detectRedundant(offset,
/// could be accomplished by successive calls to detectRedundant(offset,
/// count).
void detectRedundant(unsigned offset, unsigned count);
void detectRedundant(unsigned offset) {
16 changes: 10 additions & 6 deletions mlir/include/mlir/Dialect/Affine/IR/AffineOps.h
@@ -117,7 +117,8 @@ class AffineDmaStartOp
/// Returns the affine map used to access the source memref.
AffineMap getSrcMap() { return getSrcMapAttr().getValue(); }
AffineMapAttr getSrcMapAttr() {
return cast<AffineMapAttr>(*(*this)->getInherentAttr(getSrcMapAttrStrName()));
return cast<AffineMapAttr>(
*(*this)->getInherentAttr(getSrcMapAttrStrName()));
}

/// Returns the source memref affine map indices for this DMA operation.
@@ -156,7 +157,8 @@ class AffineDmaStartOp
/// Returns the affine map used to access the destination memref.
AffineMap getDstMap() { return getDstMapAttr().getValue(); }
AffineMapAttr getDstMapAttr() {
return cast<AffineMapAttr>(*(*this)->getInherentAttr(getDstMapAttrStrName()));
return cast<AffineMapAttr>(
*(*this)->getInherentAttr(getDstMapAttrStrName()));
}

/// Returns the destination memref indices for this DMA operation.
@@ -185,7 +187,8 @@ class AffineDmaStartOp
/// Returns the affine map used to access the tag memref.
AffineMap getTagMap() { return getTagMapAttr().getValue(); }
AffineMapAttr getTagMapAttr() {
return cast<AffineMapAttr>(*(*this)->getInherentAttr(getTagMapAttrStrName()));
return cast<AffineMapAttr>(
*(*this)->getInherentAttr(getTagMapAttrStrName()));
}

/// Returns the tag memref indices for this DMA operation.
@@ -201,7 +204,7 @@ class AffineDmaStartOp
getTagMap().getNumInputs());
}

/// Impelements the AffineMapAccessInterface.
/// Implements the AffineMapAccessInterface.
/// Returns the AffineMapAttr associated with 'memref'.
NamedAttribute getAffineMapAttrForMemRef(Value memref) {
if (memref == getSrcMemRef())
@@ -307,7 +310,8 @@ class AffineDmaWaitOp
/// Returns the affine map used to access the tag memref.
AffineMap getTagMap() { return getTagMapAttr().getValue(); }
AffineMapAttr getTagMapAttr() {
return cast<AffineMapAttr>(*(*this)->getInherentAttr(getTagMapAttrStrName()));
return cast<AffineMapAttr>(
*(*this)->getInherentAttr(getTagMapAttrStrName()));
}

/// Returns the tag memref index for this DMA operation.
@@ -321,7 +325,7 @@ class AffineDmaWaitOp
return cast<MemRefType>(getTagMemRef().getType()).getRank();
}

/// Impelements the AffineMapAccessInterface. Returns the AffineMapAttr
/// Implements the AffineMapAccessInterface. Returns the AffineMapAttr
/// associated with 'memref'.
NamedAttribute getAffineMapAttrForMemRef(Value memref) {
assert(memref == getTagMemRef());
6 changes: 3 additions & 3 deletions mlir/include/mlir/Dialect/Async/Passes.td
@@ -58,7 +58,7 @@ def AsyncFuncToAsyncRuntime : Pass<"async-func-to-async-runtime", "ModuleOp"> {
def AsyncRuntimeRefCounting : Pass<"async-runtime-ref-counting"> {
let summary = "Automatic reference counting for Async runtime operations";
let description = [{
This pass works at the async runtime abtraction level, after all
This pass works at the async runtime abstraction level, after all
`async.execute` and `async.await` operations are lowered to the async
runtime API calls, and async coroutine operations.

@@ -83,7 +83,7 @@ def AsyncRuntimePolicyBasedRefCounting
: Pass<"async-runtime-policy-based-ref-counting"> {
let summary = "Policy based reference counting for Async runtime operations";
let description = [{
This pass works at the async runtime abtraction level, after all
This pass works at the async runtime abstraction level, after all
`async.execute` and `async.await` operations are lowered to the async
runtime API calls, and async coroutine operations.

@@ -103,7 +103,7 @@ def AsyncRuntimePolicyBasedRefCounting
(this is the last operation in the coroutine resume function).
3. After `async.runtime.load` operation for async values.

This pass introduces significanly less runtime overhead compared to the
This pass introduces significantly less runtime overhead compared to the
automatic reference counting.
}];

2 changes: 1 addition & 1 deletion mlir/include/mlir/Dialect/GPU/IR/CompilationInterfaces.h
@@ -85,7 +85,7 @@ class TargetOptions {

protected:
/// Derived classes must use this constructor to initialize `typeID` to the
/// appropiate value: ie. `TargetOptions(TypeID::get<DerivedClass>())`.
/// appropriate value: ie. `TargetOptions(TypeID::get<DerivedClass>())`.
TargetOptions(
TypeID typeID, StringRef toolkitPath = {},
ArrayRef<std::string> linkFiles = {}, StringRef cmdOptions = {},
4 changes: 2 additions & 2 deletions mlir/include/mlir/Dialect/GPU/IR/GPUBase.td
@@ -102,7 +102,7 @@ def GPU_AsyncToken : DialectType<
GPU_Dialect, CPred<"::llvm::isa<::mlir::gpu::AsyncTokenType>($_self)">, "async token type">,
BuildableType<"mlir::gpu::AsyncTokenType::get($_builder.getContext())">;

// Predicat to check if type is gpu::MMAMatrixType.
// Predicate to check if type is gpu::MMAMatrixType.
def IsMMAMatrixTypePred : CPred<"::llvm::isa<::mlir::gpu::MMAMatrixType>($_self)">;

def GPU_MMAMatrix : DialectType<
@@ -139,7 +139,7 @@ def GPU_AsyncOpInterface : OpInterface<"AsyncOpInterface"> {
GPU operations implementing this interface take a list of dependencies
as `gpu.async.token` arguments and optionally return a `gpu.async.token`.

The op doesn't start executing until all depent ops producing the async
The op doesn't start executing until all dependent ops producing the async
dependency tokens have finished executing.

If the op returns a token, the op merely schedules the execution on the
2 changes: 1 addition & 1 deletion mlir/include/mlir/Dialect/GPU/IR/GPUOps.td
@@ -1698,7 +1698,7 @@ def GPU_SubgroupMmaConstantMatrixOp : GPU_Op<"subgroup_mma_constant_matrix",

The operation takes a scalar input and return a `!gpu.mma_matrix` where
each element of is equal to the operand constant. The destination
mma_matrix type must have elememt type equal to the constant type. Since
mma_matrix type must have element type equal to the constant type. Since
the layout of `!gpu.mma_matrix` is opaque this only support setting all the
elements to the same value.
