Module core::arch::wasm32

1.33.0

Available on WebAssembly only.

Platform-specific intrinsics for the wasm32 platform.

This module provides intrinsics specific to the WebAssembly architecture. Here you’ll find intrinsics specific to WebAssembly that aren’t otherwise surfaced somewhere in a cross-platform abstraction of std, and you’ll also find functions for leveraging WebAssembly proposals such as atomics and simd.

Intrinsics in the wasm32 module are modeled after the WebAssembly instructions that they represent. Most functions are named after the instruction they intend to correspond to, and the arguments/results correspond to the type signature of the instruction itself. Stable WebAssembly instructions are documented online.
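For example, the stable memory_size and memory_grow intrinsics listed below map directly onto the memory.size and memory.grow instructions: the memory index is a const generic parameter and sizes are counted in 64 KiB wasm pages. A minimal sketch of using them (the helper name here is made up):

#[cfg(target_arch = "wasm32")]
fn grow_by_pages(delta: usize) -> Option<usize> {
    use core::arch::wasm32::{memory_grow, memory_size};

    // memory.grow returns the previous size in pages, or usize::MAX on failure.
    if memory_grow::<0>(delta) == usize::MAX {
        return None;
    }

    // memory.size reports the current size of memory index 0, in 64 KiB pages.
    Some(memory_size::<0>())
}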

If a proposal is not yet stable in WebAssembly itself then the functions within this module may be unstable and require the nightly channel of Rust to use. As the proposal itself stabilizes, the intrinsics in this module should stabilize as well.

See the module documentation for general information about the arch module and platform intrinsics.

Atomics

The threads proposal for WebAssembly adds a number of instructions for dealing with multithreaded programs. Most instructions added in the atomics proposal are exposed in Rust through the std::sync::atomic module. Some instructions, however, don’t have direct equivalents in Rust so they’re exposed here instead.

Note that the instructions added in the atomics proposal can be used both in a context with a shared wasm memory and in one without. These intrinsics are always available in the standard library, but you likely won’t be able to use them very productively unless you recompile the standard library (and all your code) with -Ctarget-feature=+atomics.

It’s also worth pointing out that multi-threaded WebAssembly, and its story in Rust, is still in a somewhat “early days” phase as of the time of this writing. The pieces mostly work, but they generally require a good deal of manual setup. At this time it’s not as simple as “just call std::thread::spawn”, but it will hopefully get there one day!
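As a rough illustration, the wait and notify intrinsics below can be paired into a small futex-style parking primitive. This is only a sketch under several assumptions (a nightly toolchain with the relevant unstable feature enabled at the crate root, a build with -Ctarget-feature=+atomics, and a shared memory), and the helper names are made up:

#[cfg(all(target_arch = "wasm32", target_feature = "atomics"))]
mod parking_sketch {
    use core::arch::wasm32::{memory_atomic_notify, memory_atomic_wait32};
    use core::sync::atomic::{AtomicI32, Ordering};

    /// Blocks the current thread while `flag` still holds `expected`, or until
    /// `timeout_ns` elapses (a negative timeout waits indefinitely). Returns the
    /// raw wait result: 0 = woken, 1 = value differed, 2 = timed out.
    pub fn wait_while(flag: &AtomicI32, expected: i32, timeout_ns: i64) -> i32 {
        unsafe { memory_atomic_wait32(flag.as_ptr(), expected, timeout_ns) }
    }

    /// Publishes a new value and wakes up to `waiters` threads parked on `flag`,
    /// returning how many were actually woken.
    pub fn set_and_notify(flag: &AtomicI32, value: i32, waiters: u32) -> u32 {
        flag.store(value, Ordering::SeqCst);
        unsafe { memory_atomic_notify(flag.as_ptr(), waiters) }
    }
}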

SIMD

The simd proposal for WebAssembly added a new v128 type for a 128-bit SIMD register. It also added a large array of instructions to operate on the v128 type to perform data processing. Using SIMD on wasm is intended to feel similar to using SIMD on x86_64, for example. You’d write a function such as:

#[cfg(target_arch = "wasm32")]
#[target_feature(enable = "simd128")]
unsafe fn uses_simd() {
    use std::arch::wasm32::*;
    // ...
}

Unlike x86_64, however, WebAssembly does not currently have dynamic detection at runtime as to whether SIMD is supported (this is one of the motivators for the conditional sections and feature detection proposals, but that is still pretty early days). This means that your binary will either have SIMD and can only run on engines which support SIMD, or it will not have SIMD at all. For compatibility the standard library itself does not use any SIMD internally. Determining how best to ship your WebAssembly binary with SIMD is largely left up to you as it can be pretty nuanced depending on your situation.
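Because the choice is baked in at build time, the question “does this binary have SIMD?” can only be answered at compile time by inspecting the crate-wide target feature. A minimal sketch (the constant name is illustrative):

#[cfg(target_arch = "wasm32")]
// True only in builds produced with -Ctarget-feature=+simd128; there is no
// runtime probe on wasm, and per-function #[target_feature] attributes do not
// change the value of this crate-level cfg.
const BUILT_WITH_SIMD128: bool = cfg!(target_feature = "simd128");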

To enable SIMD support at compile time you need to do one of two things:

  • First you can annotate functions with #[target_feature(enable = "simd128")]. This causes just that one function to have SIMD support available to it, and intrinsics will get inlined as usual in this situation.

  • Second you can compile your program with -Ctarget-feature=+simd128. This compilation flag blanket enables SIMD support for your entire compilation. Note that this does not include the standard library unless you recompile the standard library.

If you enable SIMD via either of these routes then you’ll have a WebAssembly binary that uses SIMD instructions, and you’ll need to ship that accordingly. Also note that if you call SIMD intrinsics but don’t enable SIMD via either of these mechanisms, you’ll still have SIMD generated in your program. This means to generate a binary without SIMD you’ll need to avoid both options above plus calling into any intrinsics in this module.
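As a concrete example of what calling the intrinsics looks like once SIMD is enabled, the sketch below (with a made-up function name) adds two f32x4 vectors and reads one lane back out, mirroring the f32x4, f32x4_splat, f32x4_add, and f32x4_extract_lane entries listed further down:

#[cfg(target_arch = "wasm32")]
#[target_feature(enable = "simd128")]
unsafe fn add_first_lanes(a: f32, b: f32) -> f32 {
    use core::arch::wasm32::*;

    // Materialize two v128 values holding four f32 lanes each.
    let x = f32x4(a, a, a, a);
    let y = f32x4_splat(b);

    // f32x4.add: lane-wise addition of the two vectors.
    let sum = f32x4_add(x, y);

    // Read lane 0 back out as a scalar.
    f32x4_extract_lane::<0>(sum)
}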

Structs

  • v128target_family="wasm"
    WASM-specific 128-bit wide SIMD vector type.

Functions

  • f32x4_relaxed_maddExperimentaltarget_family="wasm" and relaxed-simd
    Computes a * b + c with either one rounding or two roundings.
  • f32x4_relaxed_maxExperimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of f32x4_max which is either f32x4_max or f32x4_pmax.
  • f32x4_relaxed_minExperimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of f32x4_min which is either f32x4_min or f32x4_pmin.
  • f32x4_relaxed_nmaddExperimentaltarget_family="wasm" and relaxed-simd
    Computes -a * b + c with either one rounding or two roundings.
  • f64x2_relaxed_maddExperimentaltarget_family="wasm" and relaxed-simd
    Computes a * b + c with either one rounding or two roundings.
  • f64x2_relaxed_maxExperimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of f64x2_max which is either f64x2_max or f64x2_pmax.
  • f64x2_relaxed_minExperimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of f64x2_min which is either f64x2_min or f64x2_pmin.
  • f64x2_relaxed_nmaddExperimentaltarget_family="wasm" and relaxed-simd
    Computes -a * b + c with either one rounding or two roundings.
  • i8x16_relaxed_laneselectExperimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of v128_bitselect where this either behaves the same as v128_bitselect or the high bit of each lane m is inspected and the corresponding lane of a is chosen if the bit is 1 or the lane of b is chosen if it’s zero.
  • i8x16_relaxed_swizzleExperimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of i8x16_swizzle(a, s) which selects lanes from a using indices in s.
  • i16x8_relaxed_dot_i8x16_i7x16Experimentaltarget_family="wasm" and relaxed-simd
    A relaxed dot-product instruction.
  • i16x8_relaxed_laneselectExperimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of v128_bitselect where this either behaves the same as v128_bitselect or the high bit of each lane m is inspected and the corresponding lane of a is chosen if the bit is 1 or the lane of b is chosen if it’s zero.
  • i16x8_relaxed_q15mulrExperimentaltarget_family="wasm" and relaxed-simd
A relaxed version of i16x8_q15mulr_sat where if both lanes are i16::MIN then the result is either i16::MIN or i16::MAX.
  • i32x4_relaxed_dot_i8x16_i7x16_addExperimentaltarget_family="wasm" and relaxed-simd
    Similar to i16x8_relaxed_dot_i8x16_i7x16 except that the intermediate i16x8 result is fed into i32x4_extadd_pairwise_i16x8 followed by i32x4_add to add the value c to the result.
  • i32x4_relaxed_laneselectExperimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of v128_bitselect where this either behaves the same as v128_bitselect or the high bit of each lane m is inspected and the corresponding lane of a is chosen if the bit is 1 or the lane of b is chosen if it’s zero.
  • i32x4_relaxed_trunc_f32x4Experimentaltarget_family="wasm" and relaxed-simd
A relaxed version of i32x4_trunc_sat_f32x4(a), which converts the f32 lanes of a to signed 32-bit integers.
  • i32x4_relaxed_trunc_f64x2_zeroExperimentaltarget_family="wasm" and relaxed-simd
A relaxed version of i32x4_trunc_sat_f64x2_zero(a), which converts the f64 lanes of a to signed 32-bit integers, with the upper two lanes set to zero.
  • i64x2_relaxed_laneselectExperimentaltarget_family="wasm" and relaxed-simd
    A relaxed version of v128_bitselect where this either behaves the same as v128_bitselect or the high bit of each lane m is inspected and the corresponding lane of a is chosen if the bit is 1 or the lane of b is chosen if it’s zero.
  • memory_atomic_notifyExperimentaltarget_family="wasm" and atomics
    Corresponding intrinsic to wasm’s memory.atomic.notify instruction
  • memory_atomic_wait32Experimentaltarget_family="wasm" and atomics
    Corresponding intrinsic to wasm’s memory.atomic.wait32 instruction
  • memory_atomic_wait64Experimentaltarget_family="wasm" and atomics
    Corresponding intrinsic to wasm’s memory.atomic.wait64 instruction
  • u32x4_relaxed_trunc_f32x4Experimentaltarget_family="wasm" and relaxed-simd
A relaxed version of u32x4_trunc_sat_f32x4(a), which converts the f32 lanes of a to unsigned 32-bit integers.
  • u32x4_relaxed_trunc_f64x2_zeroExperimentaltarget_family="wasm" and relaxed-simd
A relaxed version of u32x4_trunc_sat_f64x2_zero(a), which converts the f64 lanes of a to unsigned 32-bit integers, with the upper two lanes set to zero.
  • f32x4target_family="wasm"
    Materializes a SIMD value from the provided operands.
  • f32x4_abstarget_family="wasm" and simd128
    Calculates the absolute value of each lane of a 128-bit vector interpreted as four 32-bit floating point numbers.
  • f32x4_addtarget_family="wasm" and simd128
    Lane-wise addition of two 128-bit vectors interpreted as four 32-bit floating point numbers.
  • f32x4_ceiltarget_family="wasm" and simd128
    Lane-wise rounding to the nearest integral value not smaller than the input.
  • f32x4_convert_i32x4target_family="wasm" and simd128
    Converts a 128-bit vector interpreted as four 32-bit signed integers into a 128-bit vector of four 32-bit floating point numbers.
  • f32x4_convert_u32x4target_family="wasm" and simd128
    Converts a 128-bit vector interpreted as four 32-bit unsigned integers into a 128-bit vector of four 32-bit floating point numbers.
  • f32x4_demote_f64x2_zerotarget_family="wasm" and simd128
    Conversion of the two double-precision floating point lanes to two lower single-precision lanes of the result. The two higher lanes of the result are initialized to zero. If the conversion result is not representable as a single-precision floating point number, it is rounded to the nearest-even representable number.
  • f32x4_divtarget_family="wasm" and simd128
    Lane-wise division of two 128-bit vectors interpreted as four 32-bit floating point numbers.
  • f32x4_eqtarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit floating point numbers.
  • f32x4_extract_lanetarget_family="wasm" and simd128
    Extracts a lane from a 128-bit vector interpreted as 4 packed f32 numbers.
  • f32x4_floortarget_family="wasm" and simd128
    Lane-wise rounding to the nearest integral value not greater than the input.
  • f32x4_getarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit floating point numbers.
  • f32x4_gttarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit floating point numbers.
  • f32x4_letarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit floating point numbers.
  • f32x4_lttarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit floating point numbers.
  • f32x4_maxtarget_family="wasm" and simd128
Calculates the lane-wise maximum of two 128-bit vectors interpreted as four 32-bit floating point numbers.
  • f32x4_mintarget_family="wasm" and simd128
    Calculates the lane-wise minimum of two 128-bit vectors interpreted as four 32-bit floating point numbers.
  • f32x4_multarget_family="wasm" and simd128
    Lane-wise multiplication of two 128-bit vectors interpreted as four 32-bit floating point numbers.
  • f32x4_netarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit floating point numbers.
  • f32x4_nearesttarget_family="wasm" and simd128
    Lane-wise rounding to the nearest integral value; if two values are equally near, rounds to the even one.
  • f32x4_negtarget_family="wasm" and simd128
    Negates each lane of a 128-bit vector interpreted as four 32-bit floating point numbers.
  • f32x4_pmaxtarget_family="wasm" and simd128
    Lane-wise maximum value, defined as a < b ? b : a
  • f32x4_pmintarget_family="wasm" and simd128
    Lane-wise minimum value, defined as b < a ? b : a
  • f32x4_replace_lanetarget_family="wasm" and simd128
    Replaces a lane from a 128-bit vector interpreted as 4 packed f32 numbers.
  • f32x4_splattarget_family="wasm" and simd128
    Creates a vector with identical lanes.
  • f32x4_sqrttarget_family="wasm" and simd128
    Calculates the square root of each lane of a 128-bit vector interpreted as four 32-bit floating point numbers.
  • f32x4_subtarget_family="wasm" and simd128
    Lane-wise subtraction of two 128-bit vectors interpreted as four 32-bit floating point numbers.
  • f32x4_trunctarget_family="wasm" and simd128
    Lane-wise rounding to the nearest integral value with the magnitude not larger than the input.
  • f64x2target_family="wasm"
    Materializes a SIMD value from the provided operands.
  • f64x2_abstarget_family="wasm" and simd128
    Calculates the absolute value of each lane of a 128-bit vector interpreted as two 64-bit floating point numbers.
  • f64x2_addtarget_family="wasm" and simd128
    Lane-wise add of two 128-bit vectors interpreted as two 64-bit floating point numbers.
  • f64x2_ceiltarget_family="wasm" and simd128
    Lane-wise rounding to the nearest integral value not smaller than the input.
  • f64x2_convert_low_i32x4target_family="wasm" and simd128
    Lane-wise conversion from integer to floating point.
  • f64x2_convert_low_u32x4target_family="wasm" and simd128
    Lane-wise conversion from integer to floating point.
  • f64x2_divtarget_family="wasm" and simd128
    Lane-wise divide of two 128-bit vectors interpreted as two 64-bit floating point numbers.
  • f64x2_eqtarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit floating point numbers.
  • f64x2_extract_lanetarget_family="wasm" and simd128
    Extracts a lane from a 128-bit vector interpreted as 2 packed f64 numbers.
  • f64x2_floortarget_family="wasm" and simd128
    Lane-wise rounding to the nearest integral value not greater than the input.
  • f64x2_getarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit floating point numbers.
  • f64x2_gttarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit floating point numbers.
  • f64x2_letarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit floating point numbers.
  • f64x2_lttarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit floating point numbers.
  • f64x2_maxtarget_family="wasm" and simd128
    Calculates the lane-wise maximum of two 128-bit vectors interpreted as two 64-bit floating point numbers.
  • f64x2_mintarget_family="wasm" and simd128
    Calculates the lane-wise minimum of two 128-bit vectors interpreted as two 64-bit floating point numbers.
  • f64x2_multarget_family="wasm" and simd128
    Lane-wise multiply of two 128-bit vectors interpreted as two 64-bit floating point numbers.
  • f64x2_netarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit floating point numbers.
  • f64x2_nearesttarget_family="wasm" and simd128
    Lane-wise rounding to the nearest integral value; if two values are equally near, rounds to the even one.
  • f64x2_negtarget_family="wasm" and simd128
    Negates each lane of a 128-bit vector interpreted as two 64-bit floating point numbers.
  • f64x2_pmaxtarget_family="wasm" and simd128
    Lane-wise maximum value, defined as a < b ? b : a
  • f64x2_pmintarget_family="wasm" and simd128
    Lane-wise minimum value, defined as b < a ? b : a
  • f64x2_promote_low_f32x4target_family="wasm" and simd128
    Conversion of the two lower single-precision floating point lanes to the two double-precision lanes of the result.
  • f64x2_replace_lanetarget_family="wasm" and simd128
    Replaces a lane from a 128-bit vector interpreted as 2 packed f64 numbers.
  • f64x2_splattarget_family="wasm" and simd128
    Creates a vector with identical lanes.
  • f64x2_sqrttarget_family="wasm" and simd128
    Calculates the square root of each lane of a 128-bit vector interpreted as two 64-bit floating point numbers.
  • f64x2_subtarget_family="wasm" and simd128
    Lane-wise subtract of two 128-bit vectors interpreted as two 64-bit floating point numbers.
  • f64x2_trunctarget_family="wasm" and simd128
    Lane-wise rounding to the nearest integral value with the magnitude not larger than the input.
  • i8x16target_family="wasm"
    Materializes a SIMD value from the provided operands.
  • i8x16_abstarget_family="wasm" and simd128
    Lane-wise wrapping absolute value.
  • i8x16_addtarget_family="wasm" and simd128
    Adds two 128-bit vectors as if they were two packed sixteen 8-bit integers.
  • i8x16_add_sattarget_family="wasm" and simd128
    Adds two 128-bit vectors as if they were two packed sixteen 8-bit signed integers, saturating on overflow to i8::MAX.
  • i8x16_all_truetarget_family="wasm" and simd128
    Returns true if all lanes are non-zero, false otherwise.
  • i8x16_bitmasktarget_family="wasm" and simd128
Extracts the high bit for each lane in a and produces a scalar mask with all bits concatenated.
  • i8x16_eqtarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 16 eight-bit integers.
  • i8x16_extract_lanetarget_family="wasm" and simd128
    Extracts a lane from a 128-bit vector interpreted as 16 packed i8 numbers.
  • i8x16_getarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 16 eight-bit signed integers.
  • i8x16_gttarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 16 eight-bit signed integers.
  • i8x16_letarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 16 eight-bit signed integers.
  • i8x16_lttarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 16 eight-bit signed integers.
  • i8x16_maxtarget_family="wasm" and simd128
    Compares lane-wise signed integers, and returns the maximum of each pair.
  • i8x16_mintarget_family="wasm" and simd128
    Compares lane-wise signed integers, and returns the minimum of each pair.
  • i8x16_narrow_i16x8target_family="wasm" and simd128
    Converts two input vectors into a smaller lane vector by narrowing each lane.
  • i8x16_netarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 16 eight-bit integers.
  • i8x16_negtarget_family="wasm" and simd128
Negates a 128-bit vector interpreted as sixteen 8-bit signed integers.
  • i8x16_popcnttarget_family="wasm" and simd128
    Count the number of bits set to one within each lane.
  • i8x16_replace_lanetarget_family="wasm" and simd128
    Replaces a lane from a 128-bit vector interpreted as 16 packed i8 numbers.
  • i8x16_shltarget_family="wasm" and simd128
    Shifts each lane to the left by the specified number of bits.
  • i8x16_shrtarget_family="wasm" and simd128
    Shifts each lane to the right by the specified number of bits, sign extending.
  • i8x16_shuffletarget_family="wasm" and simd128
    Returns a new vector with lanes selected from the lanes of the two input vectors $a and $b specified in the 16 immediate operands.
  • i8x16_splattarget_family="wasm" and simd128
    Creates a vector with identical lanes.
  • i8x16_subtarget_family="wasm" and simd128
    Subtracts two 128-bit vectors as if they were two packed sixteen 8-bit integers.
  • i8x16_sub_sattarget_family="wasm" and simd128
    Subtracts two 128-bit vectors as if they were two packed sixteen 8-bit signed integers, saturating on overflow to i8::MIN.
  • i8x16_swizzletarget_family="wasm" and simd128
    Returns a new vector with lanes selected from the lanes of the first input vector a specified in the second input vector s.
  • i16x8target_family="wasm"
    Materializes a SIMD value from the provided operands.
  • i16x8_abstarget_family="wasm" and simd128
    Lane-wise wrapping absolute value.
  • i16x8_addtarget_family="wasm" and simd128
    Adds two 128-bit vectors as if they were two packed eight 16-bit integers.
  • i16x8_add_sattarget_family="wasm" and simd128
    Adds two 128-bit vectors as if they were two packed eight 16-bit signed integers, saturating on overflow to i16::MAX.
  • i16x8_all_truetarget_family="wasm" and simd128
    Returns true if all lanes are non-zero, false otherwise.
  • i16x8_bitmasktarget_family="wasm" and simd128
Extracts the high bit for each lane in a and produces a scalar mask with all bits concatenated.
  • i16x8_eqtarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit integers.
  • i16x8_extadd_pairwise_i8x16target_family="wasm" and simd128
    Integer extended pairwise addition producing extended results (twice wider results than the inputs).
  • i16x8_extadd_pairwise_u8x16target_family="wasm" and simd128
    Integer extended pairwise addition producing extended results (twice wider results than the inputs).
  • i16x8_extend_high_i8x16target_family="wasm" and simd128
    Converts high half of the smaller lane vector to a larger lane vector, sign extended.
  • i16x8_extend_high_u8x16target_family="wasm" and simd128
    Converts high half of the smaller lane vector to a larger lane vector, zero extended.
  • i16x8_extend_low_i8x16target_family="wasm" and simd128
    Converts low half of the smaller lane vector to a larger lane vector, sign extended.
  • i16x8_extend_low_u8x16target_family="wasm" and simd128
    Converts low half of the smaller lane vector to a larger lane vector, zero extended.
  • i16x8_extmul_high_i8x16target_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • i16x8_extmul_high_u8x16target_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • i16x8_extmul_low_i8x16target_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • i16x8_extmul_low_u8x16target_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • i16x8_extract_lanetarget_family="wasm" and simd128
    Extracts a lane from a 128-bit vector interpreted as 8 packed i16 numbers.
  • i16x8_getarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit signed integers.
  • i16x8_gttarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit signed integers.
  • i16x8_letarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit signed integers.
  • i16x8_load_extend_i8x8target_family="wasm" and simd128
    Load eight 8-bit integers and sign extend each one to a 16-bit lane
  • i16x8_load_extend_u8x8target_family="wasm" and simd128
    Load eight 8-bit integers and zero extend each one to a 16-bit lane
  • i16x8_lttarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit signed integers.
  • i16x8_maxtarget_family="wasm" and simd128
    Compares lane-wise signed integers, and returns the maximum of each pair.
  • i16x8_mintarget_family="wasm" and simd128
    Compares lane-wise signed integers, and returns the minimum of each pair.
  • i16x8_multarget_family="wasm" and simd128
    Multiplies two 128-bit vectors as if they were two packed eight 16-bit signed integers.
  • i16x8_narrow_i32x4target_family="wasm" and simd128
    Converts two input vectors into a smaller lane vector by narrowing each lane.
  • i16x8_netarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit integers.
  • i16x8_negtarget_family="wasm" and simd128
Negates a 128-bit vector interpreted as eight 16-bit signed integers.
  • i16x8_q15mulr_sattarget_family="wasm" and simd128
    Lane-wise saturating rounding multiplication in Q15 format.
  • i16x8_replace_lanetarget_family="wasm" and simd128
    Replaces a lane from a 128-bit vector interpreted as 8 packed i16 numbers.
  • i16x8_shltarget_family="wasm" and simd128
    Shifts each lane to the left by the specified number of bits.
  • i16x8_shrtarget_family="wasm" and simd128
    Shifts each lane to the right by the specified number of bits, sign extending.
  • i16x8_shuffletarget_family="wasm" and simd128
    Same as i8x16_shuffle, except operates as if the inputs were eight 16-bit integers, only taking 8 indices to shuffle.
  • i16x8_splattarget_family="wasm" and simd128
    Creates a vector with identical lanes.
  • i16x8_subtarget_family="wasm" and simd128
    Subtracts two 128-bit vectors as if they were two packed eight 16-bit integers.
  • i16x8_sub_sattarget_family="wasm" and simd128
    Subtracts two 128-bit vectors as if they were two packed eight 16-bit signed integers, saturating on overflow to i16::MIN.
  • i32x4target_family="wasm"
    Materializes a SIMD value from the provided operands.
  • i32x4_abstarget_family="wasm" and simd128
    Lane-wise wrapping absolute value.
  • i32x4_addtarget_family="wasm" and simd128
    Adds two 128-bit vectors as if they were two packed four 32-bit integers.
  • i32x4_all_truetarget_family="wasm" and simd128
    Returns true if all lanes are non-zero, false otherwise.
  • i32x4_bitmasktarget_family="wasm" and simd128
Extracts the high bit for each lane in a and produces a scalar mask with all bits concatenated.
  • i32x4_dot_i16x8target_family="wasm" and simd128
    Lane-wise multiply signed 16-bit integers in the two input vectors and add adjacent pairs of the full 32-bit results.
  • i32x4_eqtarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit integers.
  • i32x4_extadd_pairwise_i16x8target_family="wasm" and simd128
    Integer extended pairwise addition producing extended results (twice wider results than the inputs).
  • i32x4_extadd_pairwise_u16x8target_family="wasm" and simd128
    Integer extended pairwise addition producing extended results (twice wider results than the inputs).
  • i32x4_extend_high_i16x8target_family="wasm" and simd128
    Converts high half of the smaller lane vector to a larger lane vector, sign extended.
  • i32x4_extend_high_u16x8target_family="wasm" and simd128
    Converts high half of the smaller lane vector to a larger lane vector, zero extended.
  • i32x4_extend_low_i16x8target_family="wasm" and simd128
    Converts low half of the smaller lane vector to a larger lane vector, sign extended.
  • i32x4_extend_low_u16x8target_family="wasm" and simd128
    Converts low half of the smaller lane vector to a larger lane vector, zero extended.
  • i32x4_extmul_high_i16x8target_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • i32x4_extmul_high_u16x8target_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • i32x4_extmul_low_i16x8target_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • i32x4_extmul_low_u16x8target_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • i32x4_extract_lanetarget_family="wasm" and simd128
    Extracts a lane from a 128-bit vector interpreted as 4 packed i32 numbers.
  • i32x4_getarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit signed integers.
  • i32x4_gttarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit signed integers.
  • i32x4_letarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit signed integers.
  • i32x4_load_extend_i16x4target_family="wasm" and simd128
    Load four 16-bit integers and sign extend each one to a 32-bit lane
  • i32x4_load_extend_u16x4target_family="wasm" and simd128
    Load four 16-bit integers and zero extend each one to a 32-bit lane
  • i32x4_lttarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit signed integers.
  • i32x4_maxtarget_family="wasm" and simd128
    Compares lane-wise signed integers, and returns the maximum of each pair.
  • i32x4_mintarget_family="wasm" and simd128
    Compares lane-wise signed integers, and returns the minimum of each pair.
  • i32x4_multarget_family="wasm" and simd128
    Multiplies two 128-bit vectors as if they were two packed four 32-bit signed integers.
  • i32x4_netarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit integers.
  • i32x4_negtarget_family="wasm" and simd128
Negates a 128-bit vector interpreted as four 32-bit signed integers.
  • i32x4_replace_lanetarget_family="wasm" and simd128
    Replaces a lane from a 128-bit vector interpreted as 4 packed i32 numbers.
  • i32x4_shltarget_family="wasm" and simd128
    Shifts each lane to the left by the specified number of bits.
  • i32x4_shrtarget_family="wasm" and simd128
    Shifts each lane to the right by the specified number of bits, sign extending.
  • i32x4_shuffletarget_family="wasm" and simd128
    Same as i8x16_shuffle, except operates as if the inputs were four 32-bit integers, only taking 4 indices to shuffle.
  • i32x4_splattarget_family="wasm" and simd128
    Creates a vector with identical lanes.
  • i32x4_subtarget_family="wasm" and simd128
    Subtracts two 128-bit vectors as if they were two packed four 32-bit integers.
  • i32x4_trunc_sat_f32x4target_family="wasm" and simd128
    Converts a 128-bit vector interpreted as four 32-bit floating point numbers into a 128-bit vector of four 32-bit signed integers.
  • i32x4_trunc_sat_f64x2_zerotarget_family="wasm" and simd128
    Saturating conversion of the two double-precision floating point lanes to two lower integer lanes using the IEEE convertToIntegerTowardZero function.
  • i64x2target_family="wasm"
    Materializes a SIMD value from the provided operands.
  • i64x2_abstarget_family="wasm" and simd128
    Lane-wise wrapping absolute value.
  • i64x2_addtarget_family="wasm" and simd128
    Adds two 128-bit vectors as if they were two packed two 64-bit integers.
  • i64x2_all_truetarget_family="wasm" and simd128
    Returns true if all lanes are non-zero, false otherwise.
  • i64x2_bitmasktarget_family="wasm" and simd128
Extracts the high bit for each lane in a and produces a scalar mask with all bits concatenated.
  • i64x2_eqtarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit integers.
  • i64x2_extend_high_i32x4target_family="wasm" and simd128
    Converts high half of the smaller lane vector to a larger lane vector, sign extended.
  • i64x2_extend_high_u32x4target_family="wasm" and simd128
    Converts high half of the smaller lane vector to a larger lane vector, zero extended.
  • i64x2_extend_low_i32x4target_family="wasm" and simd128
    Converts low half of the smaller lane vector to a larger lane vector, sign extended.
  • i64x2_extend_low_u32x4target_family="wasm" and simd128
    Converts low half of the smaller lane vector to a larger lane vector, zero extended.
  • i64x2_extmul_high_i32x4target_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • i64x2_extmul_high_u32x4target_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • i64x2_extmul_low_i32x4target_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • i64x2_extmul_low_u32x4target_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • i64x2_extract_lanetarget_family="wasm" and simd128
    Extracts a lane from a 128-bit vector interpreted as 2 packed i64 numbers.
  • i64x2_getarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit signed integers.
  • i64x2_gttarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit signed integers.
  • i64x2_letarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit signed integers.
  • i64x2_load_extend_i32x2target_family="wasm" and simd128
    Load two 32-bit integers and sign extend each one to a 64-bit lane
  • i64x2_load_extend_u32x2target_family="wasm" and simd128
    Load two 32-bit integers and zero extend each one to a 64-bit lane
  • i64x2_lttarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit signed integers.
  • i64x2_multarget_family="wasm" and simd128
    Multiplies two 128-bit vectors as if they were two packed two 64-bit integers.
  • i64x2_netarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit integers.
  • i64x2_negtarget_family="wasm" and simd128
Negates a 128-bit vector interpreted as two 64-bit signed integers.
  • i64x2_replace_lanetarget_family="wasm" and simd128
    Replaces a lane from a 128-bit vector interpreted as 2 packed i64 numbers.
  • i64x2_shltarget_family="wasm" and simd128
    Shifts each lane to the left by the specified number of bits.
  • i64x2_shrtarget_family="wasm" and simd128
    Shifts each lane to the right by the specified number of bits, sign extending.
  • i64x2_shuffletarget_family="wasm" and simd128
    Same as i8x16_shuffle, except operates as if the inputs were two 64-bit integers, only taking 2 indices to shuffle.
  • i64x2_splattarget_family="wasm" and simd128
    Creates a vector with identical lanes.
  • i64x2_subtarget_family="wasm" and simd128
    Subtracts two 128-bit vectors as if they were two packed two 64-bit integers.
  • memory_growtarget_family="wasm"
    Corresponding intrinsic to wasm’s memory.grow instruction
  • memory_sizetarget_family="wasm"
    Corresponding intrinsic to wasm’s memory.size instruction
  • u8x16target_family="wasm"
    Materializes a SIMD value from the provided operands.
  • u8x16_addtarget_family="wasm" and simd128
    Adds two 128-bit vectors as if they were two packed sixteen 8-bit integers.
  • u8x16_add_sattarget_family="wasm" and simd128
    Adds two 128-bit vectors as if they were two packed sixteen 8-bit unsigned integers, saturating on overflow to u8::MAX.
  • u8x16_all_truetarget_family="wasm" and simd128
    Returns true if all lanes are non-zero, false otherwise.
  • u8x16_avgrtarget_family="wasm" and simd128
    Lane-wise rounding average.
  • u8x16_bitmasktarget_family="wasm" and simd128
Extracts the high bit for each lane in a and produces a scalar mask with all bits concatenated.
  • u8x16_eqtarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 16 eight-bit integers.
  • u8x16_extract_lanetarget_family="wasm" and simd128
    Extracts a lane from a 128-bit vector interpreted as 16 packed u8 numbers.
  • u8x16_getarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 16 eight-bit unsigned integers.
  • u8x16_gttarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 16 eight-bit unsigned integers.
  • u8x16_letarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 16 eight-bit unsigned integers.
  • u8x16_lttarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 16 eight-bit unsigned integers.
  • u8x16_maxtarget_family="wasm" and simd128
    Compares lane-wise unsigned integers, and returns the maximum of each pair.
  • u8x16_mintarget_family="wasm" and simd128
    Compares lane-wise unsigned integers, and returns the minimum of each pair.
  • u8x16_narrow_i16x8target_family="wasm" and simd128
    Converts two input vectors into a smaller lane vector by narrowing each lane.
  • u8x16_netarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 16 eight-bit integers.
  • u8x16_popcnttarget_family="wasm" and simd128
    Count the number of bits set to one within each lane.
  • u8x16_replace_lanetarget_family="wasm" and simd128
    Replaces a lane from a 128-bit vector interpreted as 16 packed u8 numbers.
  • u8x16_shltarget_family="wasm" and simd128
    Shifts each lane to the left by the specified number of bits.
  • u8x16_shrtarget_family="wasm" and simd128
    Shifts each lane to the right by the specified number of bits, shifting in zeros.
  • u8x16_shuffletarget_family="wasm" and simd128
    Returns a new vector with lanes selected from the lanes of the two input vectors $a and $b specified in the 16 immediate operands.
  • u8x16_splattarget_family="wasm" and simd128
    Creates a vector with identical lanes.
  • u8x16_subtarget_family="wasm" and simd128
    Subtracts two 128-bit vectors as if they were two packed sixteen 8-bit integers.
  • u8x16_sub_sattarget_family="wasm" and simd128
    Subtracts two 128-bit vectors as if they were two packed sixteen 8-bit unsigned integers, saturating on overflow to 0.
  • u8x16_swizzletarget_family="wasm" and simd128
    Returns a new vector with lanes selected from the lanes of the first input vector a specified in the second input vector s.
  • u16x8target_family="wasm"
    Materializes a SIMD value from the provided operands.
  • u16x8_addtarget_family="wasm" and simd128
    Adds two 128-bit vectors as if they were two packed eight 16-bit integers.
  • u16x8_add_sattarget_family="wasm" and simd128
    Adds two 128-bit vectors as if they were two packed eight 16-bit unsigned integers, saturating on overflow to u16::MAX.
  • u16x8_all_truetarget_family="wasm" and simd128
    Returns true if all lanes are non-zero, false otherwise.
  • u16x8_avgrtarget_family="wasm" and simd128
    Lane-wise rounding average.
  • u16x8_bitmasktarget_family="wasm" and simd128
Extracts the high bit for each lane in a and produces a scalar mask with all bits concatenated.
  • u16x8_eqtarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit integers.
  • u16x8_extadd_pairwise_u8x16target_family="wasm" and simd128
    Integer extended pairwise addition producing extended results (twice wider results than the inputs).
  • u16x8_extend_high_u8x16target_family="wasm" and simd128
    Converts high half of the smaller lane vector to a larger lane vector, zero extended.
  • u16x8_extend_low_u8x16target_family="wasm" and simd128
    Converts low half of the smaller lane vector to a larger lane vector, zero extended.
  • u16x8_extmul_high_u8x16target_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • u16x8_extmul_low_u8x16target_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • u16x8_extract_lanetarget_family="wasm" and simd128
    Extracts a lane from a 128-bit vector interpreted as 8 packed u16 numbers.
  • u16x8_getarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit unsigned integers.
  • u16x8_gttarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit unsigned integers.
  • u16x8_letarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit unsigned integers.
  • u16x8_load_extend_u8x8target_family="wasm" and simd128
    Load eight 8-bit integers and zero extend each one to a 16-bit lane
  • u16x8_lttarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit unsigned integers.
  • u16x8_maxtarget_family="wasm" and simd128
    Compares lane-wise unsigned integers, and returns the maximum of each pair.
  • u16x8_mintarget_family="wasm" and simd128
    Compares lane-wise unsigned integers, and returns the minimum of each pair.
  • u16x8_multarget_family="wasm" and simd128
    Multiplies two 128-bit vectors as if they were two packed eight 16-bit signed integers.
  • u16x8_narrow_i32x4target_family="wasm" and simd128
    Converts two input vectors into a smaller lane vector by narrowing each lane.
  • u16x8_netarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit integers.
  • u16x8_replace_lanetarget_family="wasm" and simd128
    Replaces a lane from a 128-bit vector interpreted as 8 packed u16 numbers.
  • u16x8_shltarget_family="wasm" and simd128
    Shifts each lane to the left by the specified number of bits.
  • u16x8_shrtarget_family="wasm" and simd128
    Shifts each lane to the right by the specified number of bits, shifting in zeros.
  • u16x8_shuffletarget_family="wasm" and simd128
    Same as i8x16_shuffle, except operates as if the inputs were eight 16-bit integers, only taking 8 indices to shuffle.
  • u16x8_splattarget_family="wasm" and simd128
    Creates a vector with identical lanes.
  • u16x8_subtarget_family="wasm" and simd128
    Subtracts two 128-bit vectors as if they were two packed eight 16-bit integers.
  • u16x8_sub_sattarget_family="wasm" and simd128
    Subtracts two 128-bit vectors as if they were two packed eight 16-bit unsigned integers, saturating on overflow to 0.
  • u32x4target_family="wasm"
    Materializes a SIMD value from the provided operands.
  • u32x4_addtarget_family="wasm" and simd128
    Adds two 128-bit vectors as if they were two packed four 32-bit integers.
  • u32x4_all_truetarget_family="wasm" and simd128
    Returns true if all lanes are non-zero, false otherwise.
  • u32x4_bitmasktarget_family="wasm" and simd128
Extracts the high bit for each lane in a and produces a scalar mask with all bits concatenated.
  • u32x4_eqtarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit integers.
  • u32x4_extadd_pairwise_u16x8target_family="wasm" and simd128
    Integer extended pairwise addition producing extended results (twice wider results than the inputs).
  • u32x4_extend_high_u16x8target_family="wasm" and simd128
    Converts high half of the smaller lane vector to a larger lane vector, zero extended.
  • u32x4_extend_low_u16x8target_family="wasm" and simd128
    Converts low half of the smaller lane vector to a larger lane vector, zero extended.
  • u32x4_extmul_high_u16x8target_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • u32x4_extmul_low_u16x8target_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • u32x4_extract_lanetarget_family="wasm" and simd128
    Extracts a lane from a 128-bit vector interpreted as 4 packed u32 numbers.
  • u32x4_getarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit unsigned integers.
  • u32x4_gttarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit unsigned integers.
  • u32x4_letarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit unsigned integers.
  • u32x4_load_extend_u16x4target_family="wasm" and simd128
    Load four 16-bit integers and zero extend each one to a 32-bit lane
  • u32x4_lttarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit unsigned integers.
  • u32x4_maxtarget_family="wasm" and simd128
    Compares lane-wise unsigned integers, and returns the maximum of each pair.
  • u32x4_mintarget_family="wasm" and simd128
    Compares lane-wise unsigned integers, and returns the minimum of each pair.
  • u32x4_multarget_family="wasm" and simd128
    Multiplies two 128-bit vectors as if they were two packed four 32-bit signed integers.
  • u32x4_netarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit integers.
  • u32x4_replace_lanetarget_family="wasm" and simd128
    Replaces a lane from a 128-bit vector interpreted as 4 packed u32 numbers.
  • u32x4_shltarget_family="wasm" and simd128
    Shifts each lane to the left by the specified number of bits.
  • u32x4_shrtarget_family="wasm" and simd128
    Shifts each lane to the right by the specified number of bits, shifting in zeros.
  • u32x4_shuffletarget_family="wasm" and simd128
    Same as i8x16_shuffle, except operates as if the inputs were four 32-bit integers, only taking 4 indices to shuffle.
  • u32x4_splattarget_family="wasm" and simd128
    Creates a vector with identical lanes.
  • u32x4_subtarget_family="wasm" and simd128
    Subtracts two 128-bit vectors as if they were two packed four 32-bit integers.
  • u32x4_trunc_sat_f32x4target_family="wasm" and simd128
    Converts a 128-bit vector interpreted as four 32-bit floating point numbers into a 128-bit vector of four 32-bit unsigned integers.
  • u32x4_trunc_sat_f64x2_zerotarget_family="wasm" and simd128
    Saturating conversion of the two double-precision floating point lanes to two lower integer lanes using the IEEE convertToIntegerTowardZero function.
  • u64x2target_family="wasm"
    Materializes a SIMD value from the provided operands.
  • u64x2_addtarget_family="wasm" and simd128
    Adds two 128-bit vectors as if they were two packed two 64-bit integers.
  • u64x2_all_truetarget_family="wasm" and simd128
    Returns true if all lanes are non-zero, false otherwise.
  • u64x2_bitmasktarget_family="wasm" and simd128
Extracts the high bit for each lane in a and produces a scalar mask with all bits concatenated.
  • u64x2_eqtarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit integers.
  • u64x2_extend_high_u32x4target_family="wasm" and simd128
    Converts high half of the smaller lane vector to a larger lane vector, zero extended.
  • u64x2_extend_low_u32x4target_family="wasm" and simd128
    Converts low half of the smaller lane vector to a larger lane vector, zero extended.
  • u64x2_extmul_high_u32x4target_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • u64x2_extmul_low_u32x4target_family="wasm" and simd128
    Lane-wise integer extended multiplication producing twice wider result than the inputs.
  • u64x2_extract_lanetarget_family="wasm" and simd128
    Extracts a lane from a 128-bit vector interpreted as 2 packed u64 numbers.
  • u64x2_load_extend_u32x2target_family="wasm" and simd128
    Load two 32-bit integers and zero extend each one to a 64-bit lane
  • u64x2_multarget_family="wasm" and simd128
    Multiplies two 128-bit vectors as if they were two packed two 64-bit integers.
  • u64x2_netarget_family="wasm" and simd128
    Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit integers.
  • u64x2_replace_lanetarget_family="wasm" and simd128
    Replaces a lane from a 128-bit vector interpreted as 2 packed u64 numbers.
  • u64x2_shltarget_family="wasm" and simd128
    Shifts each lane to the left by the specified number of bits.
  • u64x2_shrtarget_family="wasm" and simd128
    Shifts each lane to the right by the specified number of bits, shifting in zeros.
  • u64x2_shuffletarget_family="wasm" and simd128
    Same as i8x16_shuffle, except operates as if the inputs were two 64-bit integers, only taking 2 indices to shuffle.
  • u64x2_splattarget_family="wasm" and simd128
    Creates a vector with identical lanes.
  • u64x2_subtarget_family="wasm" and simd128
    Subtracts two 128-bit vectors as if they were two packed two 64-bit integers.
  • unreachabletarget_family="wasm"
    Generates the unreachable instruction, which causes an unconditional trap.
  • v128_andtarget_family="wasm" and simd128
    Performs a bitwise and of the two input 128-bit vectors, returning the resulting vector.
  • v128_andnottarget_family="wasm" and simd128
    Bitwise AND of bits of a and the logical inverse of bits of b.
  • v128_any_truetarget_family="wasm" and simd128
    Returns true if any bit in a is set, or false otherwise.
  • v128_bitselecttarget_family="wasm" and simd128
    Use the bitmask in c to select bits from v1 when 1 and v2 when 0.
  • v128_loadtarget_family="wasm" and simd128
    Loads a v128 vector from the given heap address.
  • v128_load8_lanetarget_family="wasm" and simd128
    Loads an 8-bit value from m and sets lane L of v to that value.
  • v128_load8_splattarget_family="wasm" and simd128
    Load a single element and splat to all lanes of a v128 vector.
  • v128_load16_lanetarget_family="wasm" and simd128
    Loads a 16-bit value from m and sets lane L of v to that value.
  • v128_load16_splattarget_family="wasm" and simd128
    Load a single element and splat to all lanes of a v128 vector.
  • v128_load32_lanetarget_family="wasm" and simd128
    Loads a 32-bit value from m and sets lane L of v to that value.
  • v128_load32_splattarget_family="wasm" and simd128
    Load a single element and splat to all lanes of a v128 vector.
  • v128_load32_zerotarget_family="wasm" and simd128
    Load a 32-bit element into the low bits of the vector and sets all other bits to zero.
  • v128_load64_lanetarget_family="wasm" and simd128
    Loads a 64-bit value from m and sets lane L of v to that value.
  • v128_load64_splattarget_family="wasm" and simd128
    Load a single element and splat to all lanes of a v128 vector.
  • v128_load64_zerotarget_family="wasm" and simd128
    Load a 64-bit element into the low bits of the vector and sets all other bits to zero.
  • v128_nottarget_family="wasm" and simd128
    Flips each bit of the 128-bit input vector.
  • v128_ortarget_family="wasm" and simd128
    Performs a bitwise or of the two input 128-bit vectors, returning the resulting vector.
  • v128_storetarget_family="wasm" and simd128
    Stores a v128 vector to the given heap address.
  • v128_store8_lanetarget_family="wasm" and simd128
    Stores the 8-bit value from lane L of v into m
  • v128_store16_lanetarget_family="wasm" and simd128
    Stores the 16-bit value from lane L of v into m
  • v128_store32_lanetarget_family="wasm" and simd128
    Stores the 32-bit value from lane L of v into m
  • v128_store64_lanetarget_family="wasm" and simd128
    Stores the 64-bit value from lane L of v into m
  • v128_xortarget_family="wasm" and simd128
    Performs a bitwise xor of the two input 128-bit vectors, returning the resulting vector.