Module core::arch::x86

Stable since 1.27.0
Available on x86 only.

Platform-specific intrinsics for the x86 platform.

See the module documentation for more details.
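Most items in this module are `unsafe` functions that also require a specific target feature at the call site. Below is a minimal sketch of the usual call pattern, not taken from this page: detect the feature at runtime, then call the intrinsic from a `#[target_feature]` function. It assumes `std` is linked, since `is_x86_feature_detected!` is a `std` macro rather than a `core` one.

```rust
#[cfg(target_arch = "x86")]
use core::arch::x86::*;
#[cfg(target_arch = "x86_64")]
use core::arch::x86_64::*;

// Compiled with SSE2 enabled for this function regardless of global target options.
#[target_feature(enable = "sse2")]
unsafe fn add_four_floats(a: __m128, b: __m128) -> __m128 {
    // Adds the four f32 lanes of `a` and `b` element-wise.
    _mm_add_ps(a, b)
}

fn main() {
    // Runtime feature check (std-only macro).
    if is_x86_feature_detected!("sse2") {
        // SAFETY: the check above guarantees SSE2 is available on this CPU.
        let sum = unsafe { add_four_floats(_mm_set1_ps(1.0), _mm_set1_ps(2.0)) };
        let mut out = [0.0f32; 4];
        // SAFETY: `out` is valid for a 16-byte unaligned store.
        unsafe { _mm_storeu_ps(out.as_mut_ptr(), sum) };
        assert_eq!(out, [3.0; 4]);
    }
}
```

In `no_std` code the runtime check is unavailable; callers typically gate on compile-time target-feature configuration instead.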

Structs

__m128bhExperimentalx86 or x86-64
128-bit wide set of eight u16 types, x86-specific
__m256bhExperimentalx86 or x86-64
256-bit wide set of 16 u16 types, x86-specific
__m512Experimentalx86 or x86-64
512-bit wide set of sixteen f32 types, x86-specific
__m512bhExperimentalx86 or x86-64
512-bit wide set of 32 u16 types, x86-specific
__m512dExperimentalx86 or x86-64
512-bit wide set of eight f64 types, x86-specific
__m512iExperimentalx86 or x86-64
512-bit wide integer vector type, x86-specific
CpuidResultx86 or x86-64
Result of the cpuid instruction.
__m128x86 or x86-64
128-bit wide set of four f32 types, x86-specific
__m128dx86 or x86-64
128-bit wide set of two f64 types, x86-specific
__m128ix86 or x86-64
128-bit wide integer vector type, x86-specific
__m256x86 or x86-64
256-bit wide set of eight f32 types, x86-specific
__m256dx86 or x86-64
256-bit wide set of four f64 types, x86-specific
__m256ix86 or x86-64
256-bit wide integer vector type, x86-specific
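These vector types are opaque: their lanes are produced and inspected through set/load and store intrinsics rather than through public fields. A minimal sketch (author's illustration, not from this page) using `__m256d` with the AVX set/store intrinsics from this module:

```rust
#[cfg(target_arch = "x86")]
use core::arch::x86::*;
#[cfg(target_arch = "x86_64")]
use core::arch::x86_64::*;

/// Fills all four f64 lanes of a `__m256d` and reads them back into an array.
#[target_feature(enable = "avx")]
unsafe fn splat_and_read(x: f64) -> [f64; 4] {
    // `__m256d` holds four f64 lanes; `_mm256_set1_pd` broadcasts `x` into all of them.
    let v: __m256d = _mm256_set1_pd(x);
    let mut out = [0.0f64; 4];
    // Unaligned store back into ordinary memory.
    _mm256_storeu_pd(out.as_mut_ptr(), v);
    out
}
```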

Constants

_MM_CMPINT_EQExperimentalx86 or x86-64
Equal
_MM_CMPINT_FALSEExperimentalx86 or x86-64
False
_MM_CMPINT_LEExperimentalx86 or x86-64
Less-than-or-equal
_MM_CMPINT_LTExperimentalx86 or x86-64
Less-than
_MM_CMPINT_NEExperimentalx86 or x86-64
Not-equal
_MM_CMPINT_NLEExperimentalx86 or x86-64
Not less-than-or-equal
_MM_CMPINT_NLTExperimentalx86 or x86-64
Not less-than
_MM_CMPINT_TRUEExperimentalx86 or x86-64
True
_MM_MANT_NORM_1_2Experimentalx86 or x86-64
interval [1, 2)
_MM_MANT_NORM_P5_1Experimentalx86 or x86-64
interval [0.5, 1)
_MM_MANT_NORM_P5_2Experimentalx86 or x86-64
interval [0.5, 2)
_MM_MANT_NORM_P75_1P5Experimentalx86 or x86-64
interval [0.75, 1.5)
_MM_MANT_SIGN_NANExperimentalx86 or x86-64
DEST = NaN if sign(SRC) = 1
_MM_MANT_SIGN_SRCExperimentalx86 or x86-64
sign = sign(SRC)
_MM_MANT_SIGN_ZEROExperimentalx86 or x86-64
sign = 0
_MM_PERM_AAAAExperimentalx86 or x86-64
_MM_PERM_AAABExperimentalx86 or x86-64
_MM_PERM_AAACExperimentalx86 or x86-64
_MM_PERM_AAADExperimentalx86 or x86-64
_MM_PERM_AABAExperimentalx86 or x86-64
_MM_PERM_AABBExperimentalx86 or x86-64
_MM_PERM_AABCExperimentalx86 or x86-64
_MM_PERM_AABDExperimentalx86 or x86-64
_MM_PERM_AACAExperimentalx86 or x86-64
_MM_PERM_AACBExperimentalx86 or x86-64
_MM_PERM_AACCExperimentalx86 or x86-64
_MM_PERM_AACDExperimentalx86 or x86-64
_MM_PERM_AADAExperimentalx86 or x86-64
_MM_PERM_AADBExperimentalx86 or x86-64
_MM_PERM_AADCExperimentalx86 or x86-64
_MM_PERM_AADDExperimentalx86 or x86-64
_MM_PERM_ABAAExperimentalx86 or x86-64
_MM_PERM_ABABExperimentalx86 or x86-64
_MM_PERM_ABACExperimentalx86 or x86-64
_MM_PERM_ABADExperimentalx86 or x86-64
_MM_PERM_ABBAExperimentalx86 or x86-64
_MM_PERM_ABBBExperimentalx86 or x86-64
_MM_PERM_ABBCExperimentalx86 or x86-64
_MM_PERM_ABBDExperimentalx86 or x86-64
_MM_PERM_ABCAExperimentalx86 or x86-64
_MM_PERM_ABCBExperimentalx86 or x86-64
_MM_PERM_ABCCExperimentalx86 or x86-64
_MM_PERM_ABCDExperimentalx86 or x86-64
_MM_PERM_ABDAExperimentalx86 or x86-64
_MM_PERM_ABDBExperimentalx86 or x86-64
_MM_PERM_ABDCExperimentalx86 or x86-64
_MM_PERM_ABDDExperimentalx86 or x86-64
_MM_PERM_ACAAExperimentalx86 or x86-64
_MM_PERM_ACABExperimentalx86 or x86-64
_MM_PERM_ACACExperimentalx86 or x86-64
_MM_PERM_ACADExperimentalx86 or x86-64
_MM_PERM_ACBAExperimentalx86 or x86-64
_MM_PERM_ACBBExperimentalx86 or x86-64
_MM_PERM_ACBCExperimentalx86 or x86-64
_MM_PERM_ACBDExperimentalx86 or x86-64
_MM_PERM_ACCAExperimentalx86 or x86-64
_MM_PERM_ACCBExperimentalx86 or x86-64
_MM_PERM_ACCCExperimentalx86 or x86-64
_MM_PERM_ACCDExperimentalx86 or x86-64
_MM_PERM_ACDAExperimentalx86 or x86-64
_MM_PERM_ACDBExperimentalx86 or x86-64
_MM_PERM_ACDCExperimentalx86 or x86-64
_MM_PERM_ACDDExperimentalx86 or x86-64
_MM_PERM_ADAAExperimentalx86 or x86-64
_MM_PERM_ADABExperimentalx86 or x86-64
_MM_PERM_ADACExperimentalx86 or x86-64
_MM_PERM_ADADExperimentalx86 or x86-64
_MM_PERM_ADBAExperimentalx86 or x86-64
_MM_PERM_ADBBExperimentalx86 or x86-64
_MM_PERM_ADBCExperimentalx86 or x86-64
_MM_PERM_ADBDExperimentalx86 or x86-64
_MM_PERM_ADCAExperimentalx86 or x86-64
_MM_PERM_ADCBExperimentalx86 or x86-64
_MM_PERM_ADCCExperimentalx86 or x86-64
_MM_PERM_ADCDExperimentalx86 or x86-64
_MM_PERM_ADDAExperimentalx86 or x86-64
_MM_PERM_ADDBExperimentalx86 or x86-64
_MM_PERM_ADDCExperimentalx86 or x86-64
_MM_PERM_ADDDExperimentalx86 or x86-64
_MM_PERM_BAAAExperimentalx86 or x86-64
_MM_PERM_BAABExperimentalx86 or x86-64
_MM_PERM_BAACExperimentalx86 or x86-64
_MM_PERM_BAADExperimentalx86 or x86-64
_MM_PERM_BABAExperimentalx86 or x86-64
_MM_PERM_BABBExperimentalx86 or x86-64
_MM_PERM_BABCExperimentalx86 or x86-64
_MM_PERM_BABDExperimentalx86 or x86-64
_MM_PERM_BACAExperimentalx86 or x86-64
_MM_PERM_BACBExperimentalx86 or x86-64
_MM_PERM_BACCExperimentalx86 or x86-64
_MM_PERM_BACDExperimentalx86 or x86-64
_MM_PERM_BADAExperimentalx86 or x86-64
_MM_PERM_BADBExperimentalx86 or x86-64
_MM_PERM_BADCExperimentalx86 or x86-64
_MM_PERM_BADDExperimentalx86 or x86-64
_MM_PERM_BBAAExperimentalx86 or x86-64
_MM_PERM_BBABExperimentalx86 or x86-64
_MM_PERM_BBACExperimentalx86 or x86-64
_MM_PERM_BBADExperimentalx86 or x86-64
_MM_PERM_BBBAExperimentalx86 or x86-64
_MM_PERM_BBBBExperimentalx86 or x86-64
_MM_PERM_BBBCExperimentalx86 or x86-64
_MM_PERM_BBBDExperimentalx86 or x86-64
_MM_PERM_BBCAExperimentalx86 or x86-64
_MM_PERM_BBCBExperimentalx86 or x86-64
_MM_PERM_BBCCExperimentalx86 or x86-64
_MM_PERM_BBCDExperimentalx86 or x86-64
_MM_PERM_BBDAExperimentalx86 or x86-64
_MM_PERM_BBDBExperimentalx86 or x86-64
_MM_PERM_BBDCExperimentalx86 or x86-64
_MM_PERM_BBDDExperimentalx86 or x86-64
_MM_PERM_BCAAExperimentalx86 or x86-64
_MM_PERM_BCABExperimentalx86 or x86-64
_MM_PERM_BCACExperimentalx86 or x86-64
_MM_PERM_BCADExperimentalx86 or x86-64
_MM_PERM_BCBAExperimentalx86 or x86-64
_MM_PERM_BCBBExperimentalx86 or x86-64
_MM_PERM_BCBCExperimentalx86 or x86-64
_MM_PERM_BCBDExperimentalx86 or x86-64
_MM_PERM_BCCAExperimentalx86 or x86-64
_MM_PERM_BCCBExperimentalx86 or x86-64
_MM_PERM_BCCCExperimentalx86 or x86-64
_MM_PERM_BCCDExperimentalx86 or x86-64
_MM_PERM_BCDAExperimentalx86 or x86-64
_MM_PERM_BCDBExperimentalx86 or x86-64
_MM_PERM_BCDCExperimentalx86 or x86-64
_MM_PERM_BCDDExperimentalx86 or x86-64
_MM_PERM_BDAAExperimentalx86 or x86-64
_MM_PERM_BDABExperimentalx86 or x86-64
_MM_PERM_BDACExperimentalx86 or x86-64
_MM_PERM_BDADExperimentalx86 or x86-64
_MM_PERM_BDBAExperimentalx86 or x86-64
_MM_PERM_BDBBExperimentalx86 or x86-64
_MM_PERM_BDBCExperimentalx86 or x86-64
_MM_PERM_BDBDExperimentalx86 or x86-64
_MM_PERM_BDCAExperimentalx86 or x86-64
_MM_PERM_BDCBExperimentalx86 or x86-64
_MM_PERM_BDCCExperimentalx86 or x86-64
_MM_PERM_BDCDExperimentalx86 or x86-64
_MM_PERM_BDDAExperimentalx86 or x86-64
_MM_PERM_BDDBExperimentalx86 or x86-64
_MM_PERM_BDDCExperimentalx86 or x86-64
_MM_PERM_BDDDExperimentalx86 or x86-64
_MM_PERM_CAAAExperimentalx86 or x86-64
_MM_PERM_CAABExperimentalx86 or x86-64
_MM_PERM_CAACExperimentalx86 or x86-64
_MM_PERM_CAADExperimentalx86 or x86-64
_MM_PERM_CABAExperimentalx86 or x86-64
_MM_PERM_CABBExperimentalx86 or x86-64
_MM_PERM_CABCExperimentalx86 or x86-64
_MM_PERM_CABDExperimentalx86 or x86-64
_MM_PERM_CACAExperimentalx86 or x86-64
_MM_PERM_CACBExperimentalx86 or x86-64
_MM_PERM_CACCExperimentalx86 or x86-64
_MM_PERM_CACDExperimentalx86 or x86-64
_MM_PERM_CADAExperimentalx86 or x86-64
_MM_PERM_CADBExperimentalx86 or x86-64
_MM_PERM_CADCExperimentalx86 or x86-64
_MM_PERM_CADDExperimentalx86 or x86-64
_MM_PERM_CBAAExperimentalx86 or x86-64
_MM_PERM_CBABExperimentalx86 or x86-64
_MM_PERM_CBACExperimentalx86 or x86-64
_MM_PERM_CBADExperimentalx86 or x86-64
_MM_PERM_CBBAExperimentalx86 or x86-64
_MM_PERM_CBBBExperimentalx86 or x86-64
_MM_PERM_CBBCExperimentalx86 or x86-64
_MM_PERM_CBBDExperimentalx86 or x86-64
_MM_PERM_CBCAExperimentalx86 or x86-64
_MM_PERM_CBCBExperimentalx86 or x86-64
_MM_PERM_CBCCExperimentalx86 or x86-64
_MM_PERM_CBCDExperimentalx86 or x86-64
_MM_PERM_CBDAExperimentalx86 or x86-64
_MM_PERM_CBDBExperimentalx86 or x86-64
_MM_PERM_CBDCExperimentalx86 or x86-64
_MM_PERM_CBDDExperimentalx86 or x86-64
_MM_PERM_CCAAExperimentalx86 or x86-64
_MM_PERM_CCABExperimentalx86 or x86-64
_MM_PERM_CCACExperimentalx86 or x86-64
_MM_PERM_CCADExperimentalx86 or x86-64
_MM_PERM_CCBAExperimentalx86 or x86-64
_MM_PERM_CCBBExperimentalx86 or x86-64
_MM_PERM_CCBCExperimentalx86 or x86-64
_MM_PERM_CCBDExperimentalx86 or x86-64
_MM_PERM_CCCAExperimentalx86 or x86-64
_MM_PERM_CCCBExperimentalx86 or x86-64
_MM_PERM_CCCCExperimentalx86 or x86-64
_MM_PERM_CCCDExperimentalx86 or x86-64
_MM_PERM_CCDAExperimentalx86 or x86-64
_MM_PERM_CCDBExperimentalx86 or x86-64
_MM_PERM_CCDCExperimentalx86 or x86-64
_MM_PERM_CCDDExperimentalx86 or x86-64
_MM_PERM_CDAAExperimentalx86 or x86-64
_MM_PERM_CDABExperimentalx86 or x86-64
_MM_PERM_CDACExperimentalx86 or x86-64
_MM_PERM_CDADExperimentalx86 or x86-64
_MM_PERM_CDBAExperimentalx86 or x86-64
_MM_PERM_CDBBExperimentalx86 or x86-64
_MM_PERM_CDBCExperimentalx86 or x86-64
_MM_PERM_CDBDExperimentalx86 or x86-64
_MM_PERM_CDCAExperimentalx86 or x86-64
_MM_PERM_CDCBExperimentalx86 or x86-64
_MM_PERM_CDCCExperimentalx86 or x86-64
_MM_PERM_CDCDExperimentalx86 or x86-64
_MM_PERM_CDDAExperimentalx86 or x86-64
_MM_PERM_CDDBExperimentalx86 or x86-64
_MM_PERM_CDDCExperimentalx86 or x86-64
_MM_PERM_CDDDExperimentalx86 or x86-64
_MM_PERM_DAAAExperimentalx86 or x86-64
_MM_PERM_DAABExperimentalx86 or x86-64
_MM_PERM_DAACExperimentalx86 or x86-64
_MM_PERM_DAADExperimentalx86 or x86-64
_MM_PERM_DABAExperimentalx86 or x86-64
_MM_PERM_DABBExperimentalx86 or x86-64
_MM_PERM_DABCExperimentalx86 or x86-64
_MM_PERM_DABDExperimentalx86 or x86-64
_MM_PERM_DACAExperimentalx86 or x86-64
_MM_PERM_DACBExperimentalx86 or x86-64
_MM_PERM_DACCExperimentalx86 or x86-64
_MM_PERM_DACDExperimentalx86 or x86-64
_MM_PERM_DADAExperimentalx86 or x86-64
_MM_PERM_DADBExperimentalx86 or x86-64
_MM_PERM_DADCExperimentalx86 or x86-64
_MM_PERM_DADDExperimentalx86 or x86-64
_MM_PERM_DBAAExperimentalx86 or x86-64
_MM_PERM_DBABExperimentalx86 or x86-64
_MM_PERM_DBACExperimentalx86 or x86-64
_MM_PERM_DBADExperimentalx86 or x86-64
_MM_PERM_DBBAExperimentalx86 or x86-64
_MM_PERM_DBBBExperimentalx86 or x86-64
_MM_PERM_DBBCExperimentalx86 or x86-64
_MM_PERM_DBBDExperimentalx86 or x86-64
_MM_PERM_DBCAExperimentalx86 or x86-64
_MM_PERM_DBCBExperimentalx86 or x86-64
_MM_PERM_DBCCExperimentalx86 or x86-64
_MM_PERM_DBCDExperimentalx86 or x86-64
_MM_PERM_DBDAExperimentalx86 or x86-64
_MM_PERM_DBDBExperimentalx86 or x86-64
_MM_PERM_DBDCExperimentalx86 or x86-64
_MM_PERM_DBDDExperimentalx86 or x86-64
_MM_PERM_DCAAExperimentalx86 or x86-64
_MM_PERM_DCABExperimentalx86 or x86-64
_MM_PERM_DCACExperimentalx86 or x86-64
_MM_PERM_DCADExperimentalx86 or x86-64
_MM_PERM_DCBAExperimentalx86 or x86-64
_MM_PERM_DCBBExperimentalx86 or x86-64
_MM_PERM_DCBCExperimentalx86 or x86-64
_MM_PERM_DCBDExperimentalx86 or x86-64
_MM_PERM_DCCAExperimentalx86 or x86-64
_MM_PERM_DCCBExperimentalx86 or x86-64
_MM_PERM_DCCCExperimentalx86 or x86-64
_MM_PERM_DCCDExperimentalx86 or x86-64
_MM_PERM_DCDAExperimentalx86 or x86-64
_MM_PERM_DCDBExperimentalx86 or x86-64
_MM_PERM_DCDCExperimentalx86 or x86-64
_MM_PERM_DCDDExperimentalx86 or x86-64
_MM_PERM_DDAAExperimentalx86 or x86-64
_MM_PERM_DDABExperimentalx86 or x86-64
_MM_PERM_DDACExperimentalx86 or x86-64
_MM_PERM_DDADExperimentalx86 or x86-64
_MM_PERM_DDBAExperimentalx86 or x86-64
_MM_PERM_DDBBExperimentalx86 or x86-64
_MM_PERM_DDBCExperimentalx86 or x86-64
_MM_PERM_DDBDExperimentalx86 or x86-64
_MM_PERM_DDCAExperimentalx86 or x86-64
_MM_PERM_DDCBExperimentalx86 or x86-64
_MM_PERM_DDCCExperimentalx86 or x86-64
_MM_PERM_DDCDExperimentalx86 or x86-64
_MM_PERM_DDDAExperimentalx86 or x86-64
_MM_PERM_DDDBExperimentalx86 or x86-64
_MM_PERM_DDDCExperimentalx86 or x86-64
_MM_PERM_DDDDExperimentalx86 or x86-64
_XABORT_CAPACITYExperimentalx86 or x86-64
Transaction abort due to the transaction using too much memory.
_XABORT_CONFLICTExperimentalx86 or x86-64
Transaction abort due to a memory conflict with another thread.
_XABORT_DEBUGExperimentalx86 or x86-64
Transaction abort due to a debug trap.
_XABORT_EXPLICITExperimentalx86 or x86-64
Transaction explicitly aborted with xabort. The parameter passed to xabort is available with _xabort_code(status).
_XABORT_NESTEDExperimentalx86 or x86-64
Transaction abort in an inner nested transaction.
_XABORT_RETRYExperimentalx86 or x86-64
Transaction retry is possible.
_XBEGIN_STARTEDExperimentalx86 or x86-64
Transaction successfully started.
_CMP_EQ_OQx86 or x86-64
Equal (ordered, non-signaling)
_CMP_EQ_OSx86 or x86-64
Equal (ordered, signaling)
_CMP_EQ_UQx86 or x86-64
Equal (unordered, non-signaling)
_CMP_EQ_USx86 or x86-64
Equal (unordered, signaling)
_CMP_FALSE_OQx86 or x86-64
False (ordered, non-signaling)
_CMP_FALSE_OSx86 or x86-64
False (ordered, signaling)
_CMP_GE_OQx86 or x86-64
Greater-than-or-equal (ordered, non-signaling)
_CMP_GE_OSx86 or x86-64
Greater-than-or-equal (ordered, signaling)
_CMP_GT_OQx86 or x86-64
Greater-than (ordered, non-signaling)
_CMP_GT_OSx86 or x86-64
Greater-than (ordered, signaling)
_CMP_LE_OQx86 or x86-64
Less-than-or-equal (ordered, non-signaling)
_CMP_LE_OSx86 or x86-64
Less-than-or-equal (ordered, signaling)
_CMP_LT_OQx86 or x86-64
Less-than (ordered, non-signaling)
_CMP_LT_OSx86 or x86-64
Less-than (ordered, signaling)
_CMP_NEQ_OQx86 or x86-64
Not-equal (ordered, non-signaling)
_CMP_NEQ_OSx86 or x86-64
Not-equal (ordered, signaling)
_CMP_NEQ_UQx86 or x86-64
Not-equal (unordered, non-signaling)
_CMP_NEQ_USx86 or x86-64
Not-equal (unordered, signaling)
_CMP_NGE_UQx86 or x86-64
Not-greater-than-or-equal (unordered, non-signaling)
_CMP_NGE_USx86 or x86-64
Not-greater-than-or-equal (unordered, signaling)
_CMP_NGT_UQx86 or x86-64
Not-greater-than (unordered, non-signaling)
_CMP_NGT_USx86 or x86-64
Not-greater-than (unordered, signaling)
_CMP_NLE_UQx86 or x86-64
Not-less-than-or-equal (unordered, non-signaling)
_CMP_NLE_USx86 or x86-64
Not-less-than-or-equal (unordered, signaling)
_CMP_NLT_UQx86 or x86-64
Not-less-than (unordered, non-signaling)
_CMP_NLT_USx86 or x86-64
Not-less-than (unordered, signaling)
_CMP_ORD_Qx86 or x86-64
Ordered (non-signaling)
_CMP_ORD_Sx86 or x86-64
Ordered (signaling)
_CMP_TRUE_UQx86 or x86-64
True (unordered, non-signaling)
_CMP_TRUE_USx86 or x86-64
True (unordered, signaling)
_CMP_UNORD_Qx86 or x86-64
Unordered (non-signaling)
_CMP_UNORD_Sx86 or x86-64
Unordered (signaling)
_MM_FROUND_CEILx86 or x86-64
round up and do not suppress exceptions
_MM_FROUND_CUR_DIRECTIONx86 or x86-64
use MXCSR.RC; see vendor::_MM_SET_ROUNDING_MODE
_MM_FROUND_FLOORx86 or x86-64
round down and do not suppress exceptions
_MM_FROUND_NEARBYINTx86 or x86-64
use MXCSR.RC and suppress exceptions; see vendor::_MM_SET_ROUNDING_MODE
_MM_FROUND_NINTx86 or x86-64
round to nearest and do not suppress exceptions
_MM_FROUND_NO_EXCx86 or x86-64
suppress exceptions
_MM_FROUND_RAISE_EXCx86 or x86-64
do not suppress exceptions
_MM_FROUND_RINTx86 or x86-64
use MXCSR.RC and do not suppress exceptions; see vendor::_MM_SET_ROUNDING_MODE
_MM_FROUND_TO_NEAREST_INTx86 or x86-64
round to nearest
_MM_FROUND_TO_NEG_INFx86 or x86-64
round down
_MM_FROUND_TO_POS_INFx86 or x86-64
round up
_MM_FROUND_TO_ZEROx86 or x86-64
truncate
_MM_FROUND_TRUNCx86 or x86-64
truncate and do not suppress exceptions
_MM_HINT_ET0x86 or x86-64
_MM_HINT_ET1x86 or x86-64
_MM_HINT_NTAx86 or x86-64
_MM_HINT_T0x86 or x86-64
_MM_HINT_T1x86 or x86-64
_MM_HINT_T2x86 or x86-64
_MM_MASK_DENORMx86 or x86-64
_MM_ROUND_DOWNx86 or x86-64
_MM_ROUND_UPx86 or x86-64
_SIDD_BIT_MASKx86 or x86-64
Mask only: return the bit mask
_SIDD_CMP_EQUAL_ANYx86 or x86-64
For each character in a, find if it is in b (Default)
_SIDD_CMP_EQUAL_EACHx86 or x86-64
The strings defined by a and b are equal
_SIDD_CMP_EQUAL_ORDEREDx86 or x86-64
Search for the defined substring in the target
_SIDD_CMP_RANGESx86 or x86-64
For each character in a, determine if b[0] <= c <= b[1] or b[2] <= c <= b[3]...
_SIDD_LEAST_SIGNIFICANTx86 or x86-64
Index only: return the least significant bit (Default)
_SIDD_MASKED_NEGATIVE_POLARITYx86 or x86-64
Negates results only before the end of the string
_SIDD_MASKED_POSITIVE_POLARITYx86 or x86-64
Do not negate results before the end of the string
_SIDD_MOST_SIGNIFICANTx86 or x86-64
Index only: return the most significant bit
_SIDD_NEGATIVE_POLARITYx86 or x86-64
Negates results
_SIDD_POSITIVE_POLARITYx86 or x86-64
Do not negate results (Default)
_SIDD_SBYTE_OPSx86 or x86-64
String contains signed 8-bit characters
_SIDD_SWORD_OPSx86 or x86-64
String contains signed 16-bit characters
_SIDD_UBYTE_OPSx86 or x86-64
String contains unsigned 8-bit characters (Default)
_SIDD_UNIT_MASKx86 or x86-64
Mask only: return the byte mask
_SIDD_UWORD_OPSx86 or x86-64
String contains unsigned 16-bit characters
_XCR_XFEATURE_ENABLED_MASKx86 or x86-64
XFEATURE_ENABLED_MASK for XCR
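The stable _CMP_* predicates above are used as the constant immediate of the AVX compare intrinsics. A minimal sketch (author's illustration, not from this page; the const-generic call form is assumed from the current intrinsic signatures):

```rust
#[cfg(target_arch = "x86")]
use core::arch::x86::*;
#[cfg(target_arch = "x86_64")]
use core::arch::x86_64::*;

/// Returns a 4-bit mask with bit i set where a[i] < b[i].
#[target_feature(enable = "avx")]
unsafe fn lanes_less_than(a: __m256d, b: __m256d) -> i32 {
    // _CMP_LT_OQ: less-than, ordered, non-signaling. Each result lane is
    // all-ones where the predicate holds and all-zeros otherwise.
    let mask = _mm256_cmp_pd::<_CMP_LT_OQ>(a, b);
    // Collect the sign bit of each f64 lane into the low 4 bits of an i32.
    _mm256_movemask_pd(mask)
}
```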

Functions

_MM_SHUFFLEExperimentalx86 or x86-64
A utility function for creating masks to use with Intel shuffle and permute intrinsics.
_kadd_mask32Experimental(x86 or x86-64) and avx512bw
Add 32-bit masks in a and b, and store the result in k.
_kadd_mask64Experimental(x86 or x86-64) and avx512bw
Add 64-bit masks in a and b, and store the result in k.
_kand_mask16Experimental(x86 or x86-64) and avx512f
Compute the bitwise AND of 16-bit masks a and b, and store the result in k.
_kand_mask32Experimental(x86 or x86-64) and avx512bw
Compute the bitwise AND of 32-bit masks a and b, and store the result in k.
_kand_mask64Experimental(x86 or x86-64) and avx512bw
Compute the bitwise AND of 64-bit masks a and b, and store the result in k.
_kandn_mask16Experimental(x86 or x86-64) and avx512f
Compute the bitwise NOT of 16-bit masks a and then AND with b, and store the result in k.
_kandn_mask32Experimental(x86 or x86-64) and avx512bw
Compute the bitwise NOT of 32-bit masks a and then AND with b, and store the result in k.
_kandn_mask64Experimental(x86 or x86-64) and avx512bw
Compute the bitwise NOT of 64-bit masks a and then AND with b, and store the result in k.
_knot_mask16Experimental(x86 or x86-64) and avx512f
Compute the bitwise NOT of 16-bit mask a, and store the result in k.
_knot_mask32Experimental(x86 or x86-64) and avx512bw
Compute the bitwise NOT of 32-bit mask a, and store the result in k.
_knot_mask64Experimental(x86 or x86-64) and avx512bw
Compute the bitwise NOT of 64-bit mask a, and store the result in k.
_kor_mask16Experimental(x86 or x86-64) and avx512f
Compute the bitwise OR of 16-bit masks a and b, and store the result in k.
_kor_mask32Experimental(x86 or x86-64) and avx512bw
Compute the bitwise OR of 32-bit masks a and b, and store the result in k.
_kor_mask64Experimental(x86 or x86-64) and avx512bw
Compute the bitwise OR of 64-bit masks a and b, and store the result in k.
_kxnor_mask16Experimental(x86 or x86-64) and avx512f
Compute the bitwise XNOR of 16-bit masks a and b, and store the result in k.
_kxnor_mask32Experimental(x86 or x86-64) and avx512bw
Compute the bitwise XNOR of 32-bit masks a and b, and store the result in k.
_kxnor_mask64Experimental(x86 or x86-64) and avx512bw
Compute the bitwise XNOR of 64-bit masks a and b, and store the result in k.
_kxor_mask16Experimental(x86 or x86-64) and avx512f
Compute the bitwise XOR of 16-bit masks a and b, and store the result in k.
_kxor_mask32Experimental(x86 or x86-64) and avx512bw
Compute the bitwise XOR of 32-bit masks a and b, and store the result in k.
_kxor_mask64Experimental(x86 or x86-64) and avx512bw
Compute the bitwise XOR of 64-bit masks a and b, and store the result in k.
_load_mask32Experimental(x86 or x86-64) and avx512bw
Load 32-bit mask from memory into k.
_load_mask64Experimental(x86 or x86-64) and avx512bw
Load 64-bit mask from memory into k.
_mm256_abs_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Compute the absolute value of packed signed 64-bit integers in a, and store the unsigned results in dst.
_mm256_aesdec_epi128Experimental(x86 or x86-64) and avx512vaes,avx512vl
Performs one round of an AES decryption flow on each 128-bit word (state) in a using the corresponding 128-bit word (key) in round_key.
_mm256_aesdeclast_epi128Experimental(x86 or x86-64) and avx512vaes,avx512vl
Performs the last round of an AES decryption flow on each 128-bit word (state) in a using the corresponding 128-bit word (key) in round_key.
_mm256_aesenc_epi128Experimental(x86 or x86-64) and avx512vaes,avx512vl
Performs one round of an AES encryption flow on each 128-bit word (state) in a using the corresponding 128-bit word (key) in round_key.
_mm256_aesenclast_epi128Experimental(x86 or x86-64) and avx512vaes,avx512vl
Performs the last round of an AES encryption flow on each 128-bit word (state) in a using the corresponding 128-bit word (key) in round_key.
_mm256_alignr_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Concatenate a and b into a 64-byte immediate result, shift the result right by imm8 32-bit elements, and store the low 32 bytes (8 elements) in dst.
_mm256_alignr_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Concatenate a and b into a 64-byte immediate result, shift the result right by imm8 64-bit elements, and store the low 32 bytes (4 elements) in dst.
_mm256_bitshuffle_epi64_maskExperimental(x86 or x86-64) and avx512bitalg,avx512vl
Considers the input b as packed 64-bit integers and c as packed 8-bit integers. Then groups 8 8-bit values from c as indices into the bits of the corresponding 64-bit integer. It then selects these bits and packs them into the output.
_mm256_broadcast_f32x4Experimental(x86 or x86-64) and avx512f,avx512vl
Broadcast the 4 packed single-precision (32-bit) floating-point elements from a to all elements of dst.
_mm256_broadcast_i32x4Experimental(x86 or x86-64) and avx512f,avx512vl
Broadcast the 4 packed 32-bit integers from a to all elements of dst.
_mm256_broadcastmb_epi64Experimental(x86 or x86-64) and avx512cd,avx512vl
Broadcast the low 8-bits from input mask k to all 64-bit elements of dst.
_mm256_broadcastmw_epi32Experimental(x86 or x86-64) and avx512cd,avx512vl
Broadcast the low 16-bits from input mask k to all 32-bit elements of dst.
_mm256_clmulepi64_epi128Experimental(x86 or x86-64) and avx512vpclmulqdq,avx512vl
Performs a carry-less multiplication of two 64-bit polynomials over the finite field GF(2^k) - in each of the 2 128-bit lanes.
_mm256_cmp_epi8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 8-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
_mm256_cmp_epi16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 16-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
_mm256_cmp_epi32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 32-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
_mm256_cmp_epi64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 64-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
_mm256_cmp_epu8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 8-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
_mm256_cmp_epu16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 16-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
_mm256_cmp_epu32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 32-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
_mm256_cmp_epu64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 64-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
_mm256_cmp_pd_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed double-precision (64-bit) floating-point elements in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
_mm256_cmp_ps_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed single-precision (32-bit) floating-point elements in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
_mm256_cmpeq_epi8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 8-bit integers in a and b for equality, and store the results in mask vector k.
_mm256_cmpeq_epi16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 16-bit integers in a and b for equality, and store the results in mask vector k.
_mm256_cmpeq_epi32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed 32-bit integers in a and b for equality, and store the results in mask vector k.
_mm256_cmpeq_epi64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed 64-bit integers in a and b for equality, and store the results in mask vector k.
_mm256_cmpeq_epu8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 8-bit integers in a and b for equality, and store the results in mask vector k.
_mm256_cmpeq_epu16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 16-bit integers in a and b for equality, and store the results in mask vector k.
_mm256_cmpeq_epu32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 32-bit integers in a and b for equality, and store the results in mask vector k.
_mm256_cmpeq_epu64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 64-bit integers in a and b for equality, and store the results in mask vector k.
_mm256_cmpge_epi8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 8-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k.
_mm256_cmpge_epi16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 16-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k.
_mm256_cmpge_epi32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 32-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k.
_mm256_cmpge_epi64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 64-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k.
_mm256_cmpge_epu8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 8-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k.
_mm256_cmpge_epu16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 16-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k.
_mm256_cmpge_epu32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 32-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k.
_mm256_cmpge_epu64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 64-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k.
_mm256_cmpgt_epi8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 8-bit integers in a and b for greater-than, and store the results in mask vector k.
_mm256_cmpgt_epi16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 16-bit integers in a and b for greater-than, and store the results in mask vector k.
_mm256_cmpgt_epi32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 32-bit integers in a and b for greater-than, and store the results in mask vector k.
_mm256_cmpgt_epi64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 64-bit integers in a and b for greater-than, and store the results in mask vector k.
_mm256_cmpgt_epu8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 8-bit integers in a and b for greater-than, and store the results in mask vector k.
_mm256_cmpgt_epu16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 16-bit integers in a and b for greater-than, and store the results in mask vector k.
_mm256_cmpgt_epu32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 32-bit integers in a and b for greater-than, and store the results in mask vector k.
_mm256_cmpgt_epu64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 64-bit integers in a and b for greater-than, and store the results in mask vector k.
_mm256_cmple_epi8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 8-bit integers in a and b for less-than-or-equal, and store the results in mask vector k.
_mm256_cmple_epi16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 16-bit integers in a and b for less-than-or-equal, and store the results in mask vector k.
_mm256_cmple_epi32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 32-bit integers in a and b for less-than-or-equal, and store the results in mask vector k.
_mm256_cmple_epi64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 64-bit integers in a and b for less-than-or-equal, and store the results in mask vector k.
_mm256_cmple_epu8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 8-bit integers in a and b for less-than-or-equal, and store the results in mask vector k.
_mm256_cmple_epu16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 16-bit integers in a and b for less-than-or-equal, and store the results in mask vector k.
_mm256_cmple_epu32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 32-bit integers in a and b for less-than-or-equal, and store the results in mask vector k.
_mm256_cmple_epu64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 64-bit integers in a and b for less-than-or-equal, and store the results in mask vector k.
_mm256_cmplt_epi8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 8-bit integers in a and b for less-than, and store the results in mask vector k.
_mm256_cmplt_epi16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 16-bit integers in a and b for less-than, and store the results in mask vector k.
_mm256_cmplt_epi32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 32-bit integers in a and b for less-than, and store the results in mask vector k.
_mm256_cmplt_epi64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 64-bit integers in a and b for less-than, and store the results in mask vector k.
_mm256_cmplt_epu8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 8-bit integers in a and b for less-than, and store the results in mask vector k.
_mm256_cmplt_epu16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 16-bit integers in a and b for less-than, and store the results in mask vector k.
_mm256_cmplt_epu32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 32-bit integers in a and b for less-than, and store the results in mask vector k.
_mm256_cmplt_epu64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 64-bit integers in a and b for less-than, and store the results in mask vector k.
_mm256_cmpneq_epi8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 8-bit integers in a and b for not-equal, and store the results in mask vector k.
_mm256_cmpneq_epi16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 16-bit integers in a and b for not-equal, and store the results in mask vector k.
_mm256_cmpneq_epi32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed 32-bit integers in a and b for not-equal, and store the results in mask vector k.
_mm256_cmpneq_epi64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 64-bit integers in a and b for not-equal, and store the results in mask vector k.
_mm256_cmpneq_epu8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 8-bit integers in a and b for not-equal, and store the results in mask vector k.
_mm256_cmpneq_epu16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 16-bit integers in a and b for not-equal, and store the results in mask vector k.
_mm256_cmpneq_epu32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 32-bit integers in a and b for not-equal, and store the results in mask vector k.
_mm256_cmpneq_epu64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 64-bit integers in a and b for not-equal, and store the results in mask vector k.
_mm256_conflict_epi32Experimental(x86 or x86-64) and avx512cd,avx512vl
Test each 32-bit element of a for equality with all other elements in a closer to the least significant bit. Each element’s comparison forms a zero extended bit vector in dst.
_mm256_conflict_epi64Experimental(x86 or x86-64) and avx512cd,avx512vl
Test each 64-bit element of a for equality with all other elements in a closer to the least significant bit. Each element’s comparison forms a zero extended bit vector in dst.
_mm256_cvtepi16_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Convert packed 16-bit integers in a to packed 8-bit integers with truncation, and store the results in dst.
_mm256_cvtepi32_epi8Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed 32-bit integers in a to packed 8-bit integers with truncation, and store the results in dst.
_mm256_cvtepi32_epi16Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed 32-bit integers in a to packed 16-bit integers with truncation, and store the results in dst.
_mm256_cvtepi64_epi8Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed 64-bit integers in a to packed 8-bit integers with truncation, and store the results in dst.
_mm256_cvtepi64_epi16Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed 64-bit integers in a to packed 16-bit integers with truncation, and store the results in dst.
_mm256_cvtepi64_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed 64-bit integers in a to packed 32-bit integers with truncation, and store the results in dst.
_mm256_cvtepu32_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Convert packed unsigned 32-bit integers in a to packed double-precision (64-bit) floating-point elements, and store the results in dst.
_mm256_cvtne2ps_pbhExperimental(x86 or x86-64) and avx512bf16,avx512vl
Convert packed single-precision (32-bit) floating-point elements in two 256-bit vectors a and b to packed BF16 (16-bit) floating-point elements, and store the results in a 256-bit wide vector. Intel’s documentation
_mm256_cvtneps_pbhExperimental(x86 or x86-64) and avx512bf16,avx512vl
Convert packed single-precision (32-bit) floating-point elements in a to packed BF16 (16-bit) floating-point elements, and store the results in dst. Intel’s documentation
_mm256_cvtpd_epu32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed double-precision (64-bit) floating-point elements in a to packed unsigned 32-bit integers, and store the results in dst.
_mm256_cvtph_psExperimental(x86 or x86-64) and f16c
Converts the 8 x 16-bit half-precision float values in the 128-bit vector a into 8 x 32-bit float values stored in a 256-bit wide vector.
_mm256_cvtps_epu32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers, and store the results in dst.
_mm256_cvtps_phExperimental(x86 or x86-64) and f16c
Converts the 8 x 32-bit float values in the 256-bit vector a into 8 x 16-bit half-precision float values stored in a 128-bit wide vector.
_mm256_cvtsepi16_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Convert packed signed 16-bit integers in a to packed 8-bit integers with signed saturation, and store the results in dst.
_mm256_cvtsepi32_epi8Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed signed 32-bit integers in a to packed 8-bit integers with signed saturation, and store the results in dst.
_mm256_cvtsepi32_epi16Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed signed 32-bit integers in a to packed 16-bit integers with signed saturation, and store the results in dst.
_mm256_cvtsepi64_epi8Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed signed 64-bit integers in a to packed 8-bit integers with signed saturation, and store the results in dst.
_mm256_cvtsepi64_epi16Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed signed 64-bit integers in a to packed 16-bit integers with signed saturation, and store the results in dst.
_mm256_cvtsepi64_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed signed 64-bit integers in a to packed 32-bit integers with signed saturation, and store the results in dst.
_mm256_cvttpd_epu32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed double-precision (64-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst.
_mm256_cvttps_epu32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst.
_mm256_cvtusepi16_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Convert packed unsigned 16-bit integers in a to packed unsigned 8-bit integers with unsigned saturation, and store the results in dst.
_mm256_cvtusepi32_epi8Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed unsigned 32-bit integers in a to packed unsigned 8-bit integers with unsigned saturation, and store the results in dst.
_mm256_cvtusepi32_epi16Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed unsigned 32-bit integers in a to packed unsigned 16-bit integers with unsigned saturation, and store the results in dst.
_mm256_cvtusepi64_epi8Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed unsigned 64-bit integers in a to packed unsigned 8-bit integers with unsigned saturation, and store the results in dst.
_mm256_cvtusepi64_epi16Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed unsigned 64-bit integers in a to packed unsigned 16-bit integers with unsigned saturation, and store the results in dst.
_mm256_cvtusepi64_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed unsigned 64-bit integers in a to packed unsigned 32-bit integers with unsigned saturation, and store the results in dst.
_mm256_dbsad_epu8Experimental(x86 or x86-64) and avx512bw,avx512vl
Compute the sum of absolute differences (SADs) of quadruplets of unsigned 8-bit integers in a compared to those in b, and store the 16-bit results in dst. Four SADs are performed on four 8-bit quadruplets for each 64-bit lane. The first two SADs use the lower 8-bit quadruplet of the lane from a, and the last two SADs use the upper 8-bit quadruplet of the lane from a. Quadruplets from b are selected from within 128-bit lanes according to the control in imm8, and each SAD in each 64-bit lane uses the selected quadruplet at 8-bit offsets.
_mm256_dpbf16_psExperimental(x86 or x86-64) and avx512bf16,avx512vl
Compute dot-product of BF16 (16-bit) floating-point pairs in a and b, accumulating the intermediate single-precision (32-bit) floating-point elements with elements in src, and store the results in dst. Intel’s documentation
_mm256_dpbusd_epi32Experimental(x86 or x86-64) and avx512vnni,avx512vl
Multiply groups of 4 adjacent pairs of unsigned 8-bit integers in a with corresponding signed 8-bit integers in b, producing 4 intermediate signed 16-bit results. Sum these 4 results with the corresponding 32-bit integer in src, and store the packed 32-bit results in dst.
_mm256_dpbusds_epi32Experimental(x86 or x86-64) and avx512vnni,avx512vl
Multiply groups of 4 adjacent pairs of unsigned 8-bit integers in a with corresponding signed 8-bit integers in b, producing 4 intermediate signed 16-bit results. Sum these 4 results with the corresponding 32-bit integer in src using signed saturation, and store the packed 32-bit results in dst.
_mm256_dpwssd_epi32Experimental(x86 or x86-64) and avx512vnni,avx512vl
Multiply groups of 2 adjacent pairs of signed 16-bit integers in a with corresponding 16-bit integers in b, producing 2 intermediate signed 32-bit results. Sum these 2 results with the corresponding 32-bit integer in src, and store the packed 32-bit results in dst.
_mm256_dpwssds_epi32Experimental(x86 or x86-64) and avx512vnni,avx512vl
Multiply groups of 2 adjacent pairs of signed 16-bit integers in a with corresponding 16-bit integers in b, producing 2 intermediate signed 32-bit results. Sum these 2 results with the corresponding 32-bit integer in src using signed saturation, and store the packed 32-bit results in dst.
_mm256_extractf32x4_psExperimental(x86 or x86-64) and avx512f,avx512vl
Extract 128 bits (composed of 4 packed single-precision (32-bit) floating-point elements) from a, selected with imm8, and store the result in dst.
_mm256_extracti32x4_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Extract 128 bits (composed of 4 packed 32-bit integers) from a, selected with IMM1, and store the result in dst.
_mm256_fixupimm_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Fix up packed double-precision (64-bit) floating-point elements in a and b using packed 64-bit integers in c, and store the results in dst. imm8 is used to set the required flags reporting.
_mm256_fixupimm_psExperimental(x86 or x86-64) and avx512f,avx512vl
Fix up packed single-precision (32-bit) floating-point elements in a and b using packed 32-bit integers in c, and store the results in dst. imm8 is used to set the required flags reporting.
_mm256_getexp_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Convert the exponent of each packed double-precision (64-bit) floating-point element in a to a double-precision (64-bit) floating-point number representing the integer exponent, and store the results in dst. This intrinsic essentially calculates floor(log2(x)) for each element.
_mm256_getexp_psExperimental(x86 or x86-64) and avx512f,avx512vl
Convert the exponent of each packed single-precision (32-bit) floating-point element in a to a single-precision (32-bit) floating-point number representing the integer exponent, and store the results in dst. This intrinsic essentially calculates floor(log2(x)) for each element.
_mm256_getmant_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Normalize the mantissas of packed double-precision (64-bit) floating-point elements in a, and store the results in dst. This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign.
The mantissa is normalized to the interval specified by interv, which can take the following values:
_MM_MANT_NORM_1_2 // interval [1, 2)
_MM_MANT_NORM_p5_2 // interval [0.5, 2)
_MM_MANT_NORM_p5_1 // interval [0.5, 1)
_MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5)
The sign is determined by sc which can take the following values:
_MM_MANT_SIGN_src // sign = sign(src)
_MM_MANT_SIGN_zero // sign = 0
_MM_MANT_SIGN_nan // dst = NaN if sign(src) = 1
_mm256_getmant_psExperimental(x86 or x86-64) and avx512f,avx512vl
Normalize the mantissas of packed single-precision (32-bit) floating-point elements in a, and store the results in dst. This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign.
The mantissa is normalized to the interval specified by interv, which can take the following values:
_MM_MANT_NORM_1_2 // interval [1, 2)
_MM_MANT_NORM_p5_2 // interval [0.5, 2)
_MM_MANT_NORM_p5_1 // interval [0.5, 1)
_MM_MANT_NORM_p75_1p5 // interval [0.75, 1.5)
The sign is determined by sc which can take the following values:
_MM_MANT_SIGN_src // sign = sign(src)
_MM_MANT_SIGN_zero // sign = 0
_MM_MANT_SIGN_nan // dst = NaN if sign(src) = 1
_mm256_gf2p8affine_epi64_epi8Experimental(x86 or x86-64) and avx512gfni,avx512bw,avx512vl
Performs an affine transformation on the packed bytes in x. That is, it computes a*x+b over the Galois Field 2^8 for each packed byte, with a being an 8x8 bit matrix and b being a constant 8-bit immediate value. Each pack of 8 bytes in x is paired with the 64-bit word at the same position in a.
_mm256_gf2p8affineinv_epi64_epi8Experimental(x86 or x86-64) and avx512gfni,avx512bw,avx512vl
Performs an affine transformation on the inverted packed bytes in x. That is, it computes a*inv(x)+b over the Galois Field 2^8 for each packed byte, with a being an 8x8 bit matrix and b being a constant 8-bit immediate value. The inverse of a byte is defined with respect to the reduction polynomial x^8+x^4+x^3+x+1. The inverse of 0 is 0. Each pack of 8 bytes in x is paired with the 64-bit word at the same position in a.
_mm256_gf2p8mul_epi8Experimental(x86 or x86-64) and avx512gfni,avx512bw,avx512vl
Performs a multiplication in GF(2^8) on the packed bytes. The field is in polynomial representation with the reduction polynomial x^8 + x^4 + x^3 + x + 1.
_mm256_insertf32x4Experimental(x86 or x86-64) and avx512f,avx512vl
Copy a to dst, then insert 128 bits (composed of 4 packed single-precision (32-bit) floating-point elements) from b into dst at the location specified by imm8.
_mm256_inserti32x4Experimental(x86 or x86-64) and avx512f,avx512vl
Copy a to dst, then insert 128 bits (composed of 4 packed 32-bit integers) from b into dst at the location specified by imm8.
_mm256_load_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Load 256-bits (composed of 8 packed 32-bit integers) from memory into dst. mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.
_mm256_load_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Load 256-bits (composed of 4 packed 64-bit integers) from memory into dst. mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.
_mm256_loadu_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Load 256-bits (composed of 32 packed 8-bit integers) from memory into dst. mem_addr does not need to be aligned on any particular boundary.
_mm256_loadu_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Load 256-bits (composed of 16 packed 16-bit integers) from memory into dst. mem_addr does not need to be aligned on any particular boundary.
_mm256_loadu_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Load 256-bits (composed of 8 packed 32-bit integers) from memory into dst. mem_addr does not need to be aligned on any particular boundary.
_mm256_loadu_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Load 256-bits (composed of 4 packed 64-bit integers) from memory into dst. mem_addr does not need to be aligned on any particular boundary.
_mm256_lzcnt_epi32Experimental(x86 or x86-64) and avx512cd,avx512vl
Counts the number of leading zero bits in each packed 32-bit integer in a, and store the results in dst.
_mm256_lzcnt_epi64Experimental(x86 or x86-64) and avx512cd,avx512vl
Counts the number of leading zero bits in each packed 64-bit integer in a, and store the results in dst.
_mm256_madd52hi_epu64Experimental(x86 or x86-64) and avx512ifma,avx512vl
Multiply packed unsigned 52-bit integers in each 64-bit element of b and c to form a 104-bit intermediate result. Add the high 52-bit unsigned integer from the intermediate result with the corresponding unsigned 64-bit integer in a, and store the results in dst.
_mm256_madd52lo_epu64Experimental(x86 or x86-64) and avx512ifma,avx512vl
Multiply packed unsigned 52-bit integers in each 64-bit element of b and c to form a 104-bit intermediate result. Add the low 52-bit unsigned integer from the intermediate result with the corresponding unsigned 64-bit integer in a, and store the results in dst.
_mm256_mask2_permutex2var_epi8Experimental(x86 or x86-64) and avx512vbmi,avx512vl
Shuffle 8-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from idx when the corresponding mask bit is not set).
_mm256_mask2_permutex2var_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shuffle 16-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from idx when the corresponding mask bit is not set).
_mm256_mask2_permutex2var_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 32-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from idx when the corresponding mask bit is not set).
_mm256_mask2_permutex2var_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 64-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from idx when the corresponding mask bit is not set).
_mm256_mask2_permutex2var_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle double-precision (64-bit) floating-point elements in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from idx when the corresponding mask bit is not set)
_mm256_mask2_permutex2var_psExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle single-precision (32-bit) floating-point elements in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from idx when the corresponding mask bit is not set).
_mm256_mask3_fmadd_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed double-precision (64-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).
_mm256_mask3_fmadd_psExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed single-precision (32-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).
_mm256_mask3_fmaddsub_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed double-precision (64-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).
_mm256_mask3_fmaddsub_psExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).
_mm256_mask3_fmsub_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).
_mm256_mask3_fmsub_psExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).
_mm256_mask3_fmsubadd_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed double-precision (64-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).
_mm256_mask3_fmsubadd_psExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).
_mm256_mask3_fnmadd_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed double-precision (64-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).
_mm256_mask3_fnmadd_psExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed single-precision (32-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).
_mm256_mask3_fnmsub_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).
_mm256_mask3_fnmsub_psExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using writemask k (elements are copied from c when the corresponding mask bit is not set).
_mm256_mask_abs_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Compute the absolute value of packed signed 8-bit integers in a, and store the unsigned results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_abs_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Compute the absolute value of packed signed 16-bit integers in a, and store the unsigned results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_abs_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Compute the absolute value of packed signed 32-bit integers in a, and store the unsigned results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_abs_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Compute the absolute value of packed signed 64-bit integers in a, and store the unsigned results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_add_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Add packed 8-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_add_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Add packed 16-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_add_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Add packed 32-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_add_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Add packed 64-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_add_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Add packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_add_psExperimental(x86 or x86-64) and avx512f,avx512vl
Add packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_adds_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Add packed signed 8-bit integers in a and b using saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_adds_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Add packed signed 16-bit integers in a and b using saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_adds_epu8Experimental(x86 or x86-64) and avx512bw,avx512vl
Add packed unsigned 8-bit integers in a and b using saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_adds_epu16Experimental(x86 or x86-64) and avx512bw,avx512vl
Add packed unsigned 16-bit integers in a and b using saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_alignr_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Concatenate pairs of 16-byte blocks in a and b into a 32-byte temporary result, shift the result right by imm8 bytes, and store the low 16 bytes in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_alignr_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Concatenate a and b into a 64-byte intermediate result, shift the result right by imm8 32-bit elements, and store the low 32 bytes (8 elements) in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_alignr_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Concatenate a and b into a 64-byte intermediate result, shift the result right by imm8 64-bit elements, and store the low 32 bytes (4 elements) in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
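A hedged sketch of the lane behaviour of _mm256_mask_alignr_epi32, under the same nightly and feature-gate assumptions as the add sketch above: with an immediate of 1, the concatenation of a (upper half) and b (lower half) is shifted right by one 32-bit element, so the unmasked lanes of dst hold b[1..=7] followed by a[0].

use std::arch::x86_64::*;

#[target_feature(enable = "avx512f,avx512vl")]
unsafe fn alignr_demo(src: __m256i, k: __mmask8, a: __m256i, b: __m256i) -> __m256i {
    // Shift the concatenation [a:b] right by one 32-bit element, keep the low
    // eight elements, then apply the writemask against src.
    _mm256_mask_alignr_epi32::<1>(src, k, a, b)
}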
_mm256_mask_and_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Compute the bitwise AND of packed 32-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_and_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Compute the bitwise AND of packed 64-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_andnot_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Compute the bitwise NOT of packed 32-bit integers in a and then AND with b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_andnot_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Compute the bitwise NOT of packed 64-bit integers in a and then AND with b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_avg_epu8Experimental(x86 or x86-64) and avx512bw,avx512vl
Average packed unsigned 8-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_avg_epu16Experimental(x86 or x86-64) and avx512bw,avx512vl
Average packed unsigned 16-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_bitshuffle_epi64_maskExperimental(x86 or x86-64) and avx512bitalg,avx512vl
Considers the input b as packed 64-bit integers and c as packed 8-bit integers. Then groups 8 8-bit values from c as indices into the bits of the corresponding 64-bit integer. It then selects these bits and packs them into the output.
_mm256_mask_blend_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Blend packed 8-bit integers from a and b using control mask k, and store the results in dst.
_mm256_mask_blend_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Blend packed 16-bit integers from a and b using control mask k, and store the results in dst.
_mm256_mask_blend_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Blend packed 32-bit integers from a and b using control mask k, and store the results in dst.
_mm256_mask_blend_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Blend packed 64-bit integers from a and b using control mask k, and store the results in dst.
_mm256_mask_blend_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Blend packed double-precision (64-bit) floating-point elements from a and b using control mask k, and store the results in dst.
_mm256_mask_blend_psExperimental(x86 or x86-64) and avx512f,avx512vl
Blend packed single-precision (32-bit) floating-point elements from a and b using control mask k, and store the results in dst.
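Unlike the intrinsics above, the blend entries take no separate src operand: the mask itself selects per lane between the two inputs (a set bit selects b, a clear bit selects a). A small sketch under the same assumptions as the add sketch above:

use std::arch::x86_64::*;

#[target_feature(enable = "avx512f,avx512vl")]
unsafe fn blend_demo() -> [i32; 8] {
    let a = _mm256_set1_epi32(0);
    let b = _mm256_set1_epi32(1);
    // Alternating mask: odd lanes come from b, even lanes from a.
    let dst = _mm256_mask_blend_epi32(0b1010_1010, a, b);
    std::mem::transmute(dst) // expected: [0, 1, 0, 1, 0, 1, 0, 1]
}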
_mm256_mask_broadcast_f32x4Experimental(x86 or x86-64) and avx512f,avx512vl
Broadcast the 4 packed single-precision (32-bit) floating-point elements from a to all elements of dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_broadcast_i32x4Experimental(x86 or x86-64) and avx512f,avx512vl
Broadcast the 4 packed 32-bit integers from a to all elements of dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_broadcastb_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Broadcast the low packed 8-bit integer from a to all elements of dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_broadcastd_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Broadcast the low packed 32-bit integer from a to all elements of dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_broadcastq_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Broadcast the low packed 64-bit integer from a to all elements of dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_broadcastsd_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Broadcast the low double-precision (64-bit) floating-point element from a to all elements of dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_broadcastss_psExperimental(x86 or x86-64) and avx512f,avx512vl
Broadcast the low single-precision (32-bit) floating-point element from a to all elements of dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_broadcastw_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Broadcast the low packed 16-bit integer from a to all elements of dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
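The masked broadcasts read their scalar from the low element of a 128-bit source vector and replicate it into every lane of dst whose mask bit is set. A sketch for the 32-bit case, same assumptions as above:

use std::arch::x86_64::*;

#[target_feature(enable = "avx512f,avx512vl")]
unsafe fn broadcast_demo() -> [i32; 8] {
    let src = _mm256_set1_epi32(0);
    let a = _mm_setr_epi32(7, 99, 99, 99); // only the low element (7) is broadcast
    let dst = _mm256_mask_broadcastd_epi32(src, 0b1111_0000, a);
    std::mem::transmute(dst) // expected: [0, 0, 0, 0, 7, 7, 7, 7]
}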
_mm256_mask_cmp_epi8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 8-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmp_epi16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 16-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmp_epi32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 32-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmp_epi64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 64-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmp_epu8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 8-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmp_epu16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 16-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmp_epu32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 32-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmp_epu64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 64-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmp_pd_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed double-precision (64-bit) floating-point elements in a and b based on the comparison operand specified by imm8, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmp_ps_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed single-precision (32-bit) floating-point elements in a and b based on the comparison operand specified by imm8, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
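For the _mm256_mask_cmp_*_mask family the predicate is a compile-time constant (one of the _MM_CMPINT_* values listed among this module's constants), and k1 pre-masks the result: lanes whose k1 bit is clear are zeroed regardless of the comparison outcome. A sketch under the same assumptions as above:

use std::arch::x86_64::*;

#[target_feature(enable = "avx512f,avx512vl")]
unsafe fn cmp_demo() -> __mmask8 {
    let a = _mm256_setr_epi32(0, 1, 2, 3, 4, 5, 6, 7);
    let b = _mm256_set1_epi32(4);
    // Less-than predicate, pre-masked so only the low four lanes can report a hit;
    // lanes 0..=3 satisfy a < b, so the expected result is 0b0000_1111.
    _mm256_mask_cmp_epi32_mask::<_MM_CMPINT_LT>(0b0000_1111, a, b)
}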
_mm256_mask_cmpeq_epi8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 8-bit integers in a and b for equality, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpeq_epi16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 16-bit integers in a and b for equality, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpeq_epi32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed 32-bit integers in a and b for equality, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpeq_epi64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed 64-bit integers in a and b for equality, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpeq_epu8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 8-bit integers in a and b for equality, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpeq_epu16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 16-bit integers in a and b for equality, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpeq_epu32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 32-bit integers in a and b for equality, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpeq_epu64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 64-bit integers in a and b for equality, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpge_epi8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 8-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpge_epi16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 16-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpge_epi32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 32-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpge_epi64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 64-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpge_epu8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 8-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpge_epu16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 16-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpge_epu32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 32-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpge_epu64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 64-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpgt_epi8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 8-bit integers in a and b for greater-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpgt_epi16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 16-bit integers in a and b for greater-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpgt_epi32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 32-bit integers in a and b for greater-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpgt_epi64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 64-bit integers in a and b for greater-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpgt_epu8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 8-bit integers in a and b for greater-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpgt_epu16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 16-bit integers in a and b for greater-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpgt_epu32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 32-bit integers in a and b for greater-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpgt_epu64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 64-bit integers in a and b for greater-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmple_epi8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 8-bit integers in a and b for less-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmple_epi16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 16-bit integers in a and b for less-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmple_epi32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 32-bit integers in a and b for less-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmple_epi64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 64-bit integers in a and b for less-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmple_epu8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 8-bit integers in a and b for less-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmple_epu16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 16-bit integers in a and b for less-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmple_epu32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 32-bit integers in a and b for less-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmple_epu64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 64-bit integers in a and b for less-than-or-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmplt_epi8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 8-bit integers in a and b for less-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmplt_epi16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 16-bit integers in a and b for less-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmplt_epi32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 32-bit integers in a and b for less-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmplt_epi64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 64-bit integers in a and b for less-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmplt_epu8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 8-bit integers in a and b for less-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmplt_epu16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 16-bit integers in a and b for less-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmplt_epu32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 32-bit integers in a and b for less-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmplt_epu64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 64-bit integers in a and b for less-than, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpneq_epi8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 8-bit integers in a and b for not-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpneq_epi16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 16-bit integers in a and b for not-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpneq_epi32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed 32-bit integers in a and b for not-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpneq_epi64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 64-bit integers in a and b for not-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpneq_epu8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 8-bit integers in a and b for not-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpneq_epu16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 16-bit integers in a and b for not-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpneq_epu32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 32-bit integers in a and b for not-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_cmpneq_epu64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 64-bit integers in a and b for not-equal, and store the results in mask vector k using zeromask k1 (elements are zeroed out when the corresponding mask bit is not set).
_mm256_mask_compress_epi8Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Contiguously store the active 8-bit integers in a (those with their respective bit set in writemask k) to dst, and pass through the remaining elements from src.
_mm256_mask_compress_epi16Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Contiguously store the active 16-bit integers in a (those with their respective bit set in writemask k) to dst, and pass through the remaining elements from src.
_mm256_mask_compress_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Contiguously store the active 32-bit integers in a (those with their respective bit set in writemask k) to dst, and pass through the remaining elements from src.
_mm256_mask_compress_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Contiguously store the active 64-bit integers in a (those with their respective bit set in writemask k) to dst, and pass through the remaining elements from src.
_mm256_mask_compress_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Contiguously store the active double-precision (64-bit) floating-point elements in a (those with their respective bit set in writemask k) to dst, and pass through the remaining elements from src.
_mm256_mask_compress_psExperimental(x86 or x86-64) and avx512f,avx512vl
Contiguously store the active single-precision (32-bit) floating-point elements in a (those with their respective bit set in writemask k) to dst, and pass through the remaining elements from src.
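compress packs the selected lanes toward the low end of dst and fills the remaining high lanes from src; the compressstoreu entries below write the packed lanes directly to memory instead. A sketch for 32-bit lanes, same assumptions as above:

use std::arch::x86_64::*;

#[target_feature(enable = "avx512f,avx512vl")]
unsafe fn compress_demo() -> [i32; 8] {
    let src = _mm256_set1_epi32(-1);
    let a = _mm256_setr_epi32(10, 11, 12, 13, 14, 15, 16, 17);
    // Active lanes 1, 3 and 5 are packed contiguously into lanes 0..=2.
    let dst = _mm256_mask_compress_epi32(src, 0b0010_1010, a);
    std::mem::transmute(dst) // expected: [11, 13, 15, -1, -1, -1, -1, -1]
}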
_mm256_mask_compressstoreu_epi8Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Contiguously store the active 8-bit integers in a (those with their respective bit set in writemask k) to unaligned memory at base_addr.
_mm256_mask_compressstoreu_epi16Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Contiguously store the active 16-bit integers in a (those with their respective bit set in writemask k) to unaligned memory at base_addr.
_mm256_mask_compressstoreu_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Contiguously store the active 32-bit integers in a (those with their respective bit set in writemask k) to unaligned memory at base_addr.
_mm256_mask_compressstoreu_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Contiguously store the active 64-bit integers in a (those with their respective bit set in writemask k) to unaligned memory at base_addr.
_mm256_mask_compressstoreu_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Contiguously store the active double-precision (64-bit) floating-point elements in a (those with their respective bit set in writemask k) to unaligned memory at base_addr.
_mm256_mask_compressstoreu_psExperimental(x86 or x86-64) and avx512f,avx512vl
Contiguously store the active single-precision (32-bit) floating-point elements in a (those with their respective bit set in writemask k) to unaligned memory at base_addr.
_mm256_mask_conflict_epi32Experimental(x86 or x86-64) and avx512cd,avx512vl
Test each 32-bit element of a for equality with all other elements in a closer to the least significant bit using writemask k (elements are copied from src when the corresponding mask bit is not set). Each element’s comparison forms a zero extended bit vector in dst.
_mm256_mask_conflict_epi64Experimental(x86 or x86-64) and avx512cd,avx512vl
Test each 64-bit element of a for equality with all other elements in a closer to the least significant bit using writemask k (elements are copied from src when the corresponding mask bit is not set). Each element’s comparison forms a zero extended bit vector in dst.
_mm256_mask_cvt_roundps_phExperimental(x86 or x86-64) and avx512f,avx512vl
Convert packed single-precision (32-bit) floating-point elements in a to packed half-precision (16-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
Rounding is done according to the imm8[2:0] parameter, which can be one of:
(_MM_FROUND_TO_NEAREST_INT | _MM_FROUND_NO_EXC) // round to nearest, and suppress exceptions
(_MM_FROUND_TO_NEG_INF | _MM_FROUND_NO_EXC) // round down, and suppress exceptions
(_MM_FROUND_TO_POS_INF | _MM_FROUND_NO_EXC) // round up, and suppress exceptions
(_MM_FROUND_TO_ZERO | _MM_FROUND_NO_EXC) // truncate, and suppress exceptions
_MM_FROUND_CUR_DIRECTION // use MXCSR.RC; see _MM_SET_ROUNDING_MODE
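In the Rust binding the rounding argument is a const generic, so the constants above are combined at compile time. A hedged sketch, same assumptions as above (the eight half-precision results occupy a __m128i):

use std::arch::x86_64::*;

#[target_feature(enable = "avx512f,avx512vl")]
unsafe fn cvt_roundps_ph_demo(src: __m128i, k: __mmask8, a: __m256) -> __m128i {
    // Round to nearest and suppress exceptions, per the imm8[2:0] encoding above.
    _mm256_mask_cvt_roundps_ph::<{ _MM_FROUND_TO_NEAREST_INT | _MM_FROUND_NO_EXC }>(src, k, a)
}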
_mm256_mask_cvtepi8_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Sign extend packed 8-bit integers in a to packed 16-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtepi8_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Sign extend packed 8-bit integers in a to packed 32-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtepi8_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Sign extend packed 8-bit integers in the low 4 bytes of a to packed 64-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtepi16_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Convert packed 16-bit integers in a to packed 8-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtepi16_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Sign extend packed 16-bit integers in a to packed 32-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtepi16_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Sign extend packed 16-bit integers in a to packed 64-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtepi16_storeu_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Convert packed 16-bit integers in a to packed 8-bit integers with truncation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
_mm256_mask_cvtepi32_epi8Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed 32-bit integers in a to packed 8-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtepi32_epi16Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed 32-bit integers in a to packed 16-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtepi32_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Sign extend packed 32-bit integers in a to packed 64-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtepi32_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Convert packed signed 32-bit integers in a to packed double-precision (64-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtepi32_psExperimental(x86 or x86-64) and avx512f,avx512vl
Convert packed signed 32-bit integers in a to packed single-precision (32-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
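The masked conversions follow the same pattern as the masked arithmetic: converted lanes land in dst and lanes with a clear mask bit are copied from src. A sketch for the signed 32-bit integer to f32 case, same assumptions as above:

use std::arch::x86_64::*;

#[target_feature(enable = "avx512f,avx512vl")]
unsafe fn cvtepi32_ps_demo() -> [f32; 8] {
    let src = _mm256_set1_ps(f32::NAN);
    let a = _mm256_setr_epi32(1, 2, 3, 4, 5, 6, 7, 8);
    let dst = _mm256_mask_cvtepi32_ps(src, 0b0000_0011, a);
    std::mem::transmute(dst) // expected: [1.0, 2.0, NaN, NaN, NaN, NaN, NaN, NaN]
}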
_mm256_mask_cvtepi32_storeu_epi8Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed 32-bit integers in a to packed 8-bit integers with truncation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
_mm256_mask_cvtepi32_storeu_epi16Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed 32-bit integers in a to packed 16-bit integers with truncation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
_mm256_mask_cvtepi64_epi8Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed 64-bit integers in a to packed 8-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtepi64_epi16Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed 64-bit integers in a to packed 16-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtepi64_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed 64-bit integers in a to packed 32-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtepi64_storeu_epi8Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed 64-bit integers in a to packed 8-bit integers with truncation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
_mm256_mask_cvtepi64_storeu_epi16Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed 64-bit integers in a to packed 16-bit integers with truncation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
_mm256_mask_cvtepi64_storeu_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed 64-bit integers in a to packed 32-bit integers with truncation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
_mm256_mask_cvtepu8_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Zero extend packed unsigned 8-bit integers in a to packed 16-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtepu8_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Zero extend packed unsigned 8-bit integers in the low 8 bytes of a to packed 32-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtepu8_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Zero extend packed unsigned 8-bit integers in the low 4 bytes of a to packed 64-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtepu16_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Zero extend packed unsigned 16-bit integers in a to packed 32-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtepu16_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Zero extend packed unsigned 16-bit integers in the low 8 bytes of a to packed 64-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtepu32_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Zero extend packed unsigned 32-bit integers in a to packed 64-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtepu32_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Convert packed unsigned 32-bit integers in a to packed double-precision (64-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtne2ps_pbhExperimental(x86 or x86-64) and avx512bf16,avx512vl
Convert packed single-precision (32-bit) floating-point elements in two vectors a and b to packed BF16 (16-bit) floating-point elements, and store the results in a single vector dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Intel’s documentation
_mm256_mask_cvtneps_pbhExperimental(x86 or x86-64) and avx512bf16,avx512vl
Convert packed single-precision (32-bit) floating-point elements in a to packed BF16 (16-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Intel’s documentation
_mm256_mask_cvtpd_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtpd_epu32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed double-precision (64-bit) floating-point elements in a to packed unsigned 32-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtpd_psExperimental(x86 or x86-64) and avx512f,avx512vl
Convert packed double-precision (64-bit) floating-point elements in a to packed single-precision (32-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtph_psExperimental(x86 or x86-64) and avx512f,avx512vl
Convert packed half-precision (16-bit) floating-point elements in a to packed single-precision (32-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtps_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtps_epu32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtps_phExperimental(x86 or x86-64) and avx512f,avx512vl
Convert packed single-precision (32-bit) floating-point elements in a to packed half-precision (16-bit) floating-point elements, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
Rounding is done according to the imm8[2:0] parameter, which can be one of:
_MM_FROUND_TO_NEAREST_INT // round to nearest
_MM_FROUND_TO_NEG_INF // round down
_MM_FROUND_TO_POS_INF // round up
_MM_FROUND_TO_ZERO // truncate
_MM_FROUND_CUR_DIRECTION // use MXCSR.RC; see _MM_SET_ROUNDING_MODE
_mm256_mask_cvtsepi16_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Convert packed signed 16-bit integers in a to packed 8-bit integers with signed saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtsepi16_storeu_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Convert packed signed 16-bit integers in a to packed 8-bit integers with signed saturation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
_mm256_mask_cvtsepi32_epi8Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed signed 32-bit integers in a to packed 8-bit integers with signed saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtsepi32_epi16Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed signed 32-bit integers in a to packed 16-bit integers with signed saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
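The cvtsepi* down-conversions saturate rather than truncate, clamping out-of-range values to the limits of the narrower type. A sketch for 32-bit to 16-bit, same assumptions as above (the eight 16-bit results occupy a __m128i):

use std::arch::x86_64::*;

#[target_feature(enable = "avx512f,avx512vl")]
unsafe fn cvtsepi32_epi16_demo() -> [i16; 8] {
    let src = _mm_set1_epi16(0);
    let a = _mm256_setr_epi32(1, -1, 100_000, -100_000, 0, 0, 0, 0);
    let dst = _mm256_mask_cvtsepi32_epi16(src, 0b0000_1111, a);
    std::mem::transmute(dst) // expected: [1, -1, 32767, -32768, 0, 0, 0, 0]
}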
_mm256_mask_cvtsepi32_storeu_epi8Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed signed 32-bit integers in a to packed 8-bit integers with signed saturation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
_mm256_mask_cvtsepi32_storeu_epi16Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed signed 32-bit integers in a to packed 16-bit integers with signed saturation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
_mm256_mask_cvtsepi64_epi8Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed signed 64-bit integers in a to packed 8-bit integers with signed saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtsepi64_epi16Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed signed 64-bit integers in a to packed 16-bit integers with signed saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtsepi64_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed signed 64-bit integers in a to packed 32-bit integers with signed saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtsepi64_storeu_epi8Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed signed 64-bit integers in a to packed 8-bit integers with signed saturation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
_mm256_mask_cvtsepi64_storeu_epi16Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed signed 64-bit integers in a to packed 16-bit integers with signed saturation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
_mm256_mask_cvtsepi64_storeu_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed signed 64-bit integers in a to packed 32-bit integers with signed saturation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
_mm256_mask_cvttpd_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvttpd_epu32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed double-precision (64-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvttps_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvttps_epu32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtusepi16_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Convert packed unsigned 16-bit integers in a to packed unsigned 8-bit integers with unsigned saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtusepi16_storeu_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Convert packed unsigned 16-bit integers in a to packed unsigned 8-bit integers with unsigned saturation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
_mm256_mask_cvtusepi32_epi8Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed unsigned 32-bit integers in a to packed unsigned 8-bit integers with unsigned saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtusepi32_epi16Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed unsigned 32-bit integers in a to packed unsigned 16-bit integers with unsigned saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtusepi32_storeu_epi8Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed unsigned 32-bit integers in a to packed 8-bit integers with unsigned saturation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
_mm256_mask_cvtusepi32_storeu_epi16Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed unsigned 32-bit integers in a to packed unsigned 16-bit integers with unsigned saturation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
_mm256_mask_cvtusepi64_epi8Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed unsigned 64-bit integers in a to packed unsigned 8-bit integers with unsigned saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtusepi64_epi16Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed unsigned 64-bit integers in a to packed unsigned 16-bit integers with unsigned saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtusepi64_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed unsigned 64-bit integers in a to packed unsigned 32-bit integers with unsigned saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_cvtusepi64_storeu_epi8Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed unsigned 64-bit integers in a to packed 8-bit integers with unsigned saturation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
_mm256_mask_cvtusepi64_storeu_epi16Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed unsigned 64-bit integers in a to packed 16-bit integers with unsigned saturation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
_mm256_mask_cvtusepi64_storeu_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed unsigned 64-bit integers in a to packed 32-bit integers with unsigned saturation, and store the active results (those with their respective bit set in writemask k) to unaligned memory at base_addr.
_mm256_mask_dbsad_epu8Experimental(x86 or x86-64) and avx512bw,avx512vl
Compute the sum of absolute differences (SADs) of quadruplets of unsigned 8-bit integers in a compared to those in b, and store the 16-bit results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Four SADs are performed on four 8-bit quadruplets for each 64-bit lane. The first two SADs use the lower 8-bit quadruplet of the lane from a, and the last two SADs use the upper 8-bit quadruplet of the lane from a. Quadruplets from b are selected from within 128-bit lanes according to the control in imm8, and each SAD in each 64-bit lane uses the selected quadruplet at 8-bit offsets.
_mm256_mask_div_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Divide packed double-precision (64-bit) floating-point elements in a by packed elements in b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_div_psExperimental(x86 or x86-64) and avx512f,avx512vl
Divide packed single-precision (32-bit) floating-point elements in a by packed elements in b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_dpbf16_psExperimental(x86 or x86-64) and avx512bf16,avx512vl
Compute dot-product of BF16 (16-bit) floating-point pairs in a and b, accumulating the intermediate single-precision (32-bit) floating-point elements with elements in src, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). Intel’s documentation
_mm256_mask_dpbusd_epi32Experimental(x86 or x86-64) and avx512vnni,avx512vl
Multiply groups of 4 adjacent pairs of unsigned 8-bit integers in a with corresponding signed 8-bit integers in b, producing 4 intermediate signed 16-bit results. Sum these 4 results with the corresponding 32-bit integer in src, and store the packed 32-bit results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_dpbusds_epi32Experimental(x86 or x86-64) and avx512vnni,avx512vl
Multiply groups of 4 adjacent pairs of unsigned 8-bit integers in a with corresponding signed 8-bit integers in b, producing 4 intermediate signed 16-bit results. Sum these 4 results with the corresponding 32-bit integer in src using signed saturation, and store the packed 32-bit results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_dpwssd_epi32Experimental(x86 or x86-64) and avx512vnni,avx512vl
Multiply groups of 2 adjacent pairs of signed 16-bit integers in a with corresponding 16-bit integers in b, producing 2 intermediate signed 32-bit results. Sum these 2 results with the corresponding 32-bit integer in src, and store the packed 32-bit results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_dpwssds_epi32Experimental(x86 or x86-64) and avx512vnni,avx512vl
Multiply groups of 2 adjacent pairs of signed 16-bit integers in a with corresponding 16-bit integers in b, producing 2 intermediate signed 32-bit results. Sum these 2 results with the corresponding 32-bit integer in src using signed saturation, and store the packed 32-bit results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
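A plain scalar model of what one 32-bit lane of the dpwssd entries computes (no intrinsics, just the arithmetic; the masked variants additionally copy src through for lanes whose mask bit is clear):

// Scalar reference: one 32-bit output lane of dpwssd from a pair of i16 values
// taken from a and the corresponding pair from b.
fn dpwssd_lane(src: i32, a: [i16; 2], b: [i16; 2]) -> i32 {
    let p0 = a[0] as i32 * b[0] as i32; // intermediate signed 32-bit products
    let p1 = a[1] as i32 * b[1] as i32;
    // dpwssd adds without saturation; dpwssds would saturate this sum instead.
    src.wrapping_add(p0).wrapping_add(p1)
}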
_mm256_mask_expand_epi8Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Load contiguous active 8-bit integers from a (those with their respective bit set in mask k), and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_expand_epi16Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Load contiguous active 16-bit integers from a (those with their respective bit set in mask k), and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_expand_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Load contiguous active 32-bit integers from a (those with their respective bit set in mask k), and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_expand_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Load contiguous active 64-bit integers from a (those with their respective bit set in mask k), and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_expand_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Load contiguous active double-precision (64-bit) floating-point elements from a (those with their respective bit set in mask k), and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_expand_psExperimental(x86 or x86-64) and avx512f,avx512vl
Load contiguous active single-precision (32-bit) floating-point elements from a (those with their respective bit set in mask k), and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
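expand is the inverse of compress above: it reads contiguous elements from the low end of a and scatters them into the lanes of dst whose mask bit is set, filling the remaining lanes from src. A sketch, same assumptions as above:

use std::arch::x86_64::*;

#[target_feature(enable = "avx512f,avx512vl")]
unsafe fn expand_demo() -> [i32; 8] {
    let src = _mm256_set1_epi32(-1);
    let a = _mm256_setr_epi32(10, 11, 12, 13, 14, 15, 16, 17);
    // The first three elements of a (10, 11, 12) land in lanes 1, 3 and 5.
    let dst = _mm256_mask_expand_epi32(src, 0b0010_1010, a);
    std::mem::transmute(dst) // expected: [-1, 10, -1, 11, -1, 12, -1, -1]
}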
_mm256_mask_expandloadu_epi8Experimental(x86 or x86-64) and avx512f,avx512bw,avx512vbmi2,avx512vl,avx
Load contiguous active 8-bit integers from unaligned memory at mem_addr (those with their respective bit set in mask k), and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_expandloadu_epi16Experimental(x86 or x86-64) and avx512f,avx512vbmi2,avx512vl,avx
Load contiguous active 16-bit integers from unaligned memory at mem_addr (those with their respective bit set in mask k), and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_expandloadu_epi32Experimental(x86 or x86-64) and avx512f,avx512vl,avx
Load contiguous active 32-bit integers from unaligned memory at mem_addr (those with their respective bit set in mask k), and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_expandloadu_epi64Experimental(x86 or x86-64) and avx512f,avx512vl,avx
Load contiguous active 64-bit integers from unaligned memory at mem_addr (those with their respective bit set in mask k), and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_expandloadu_pdExperimental(x86 or x86-64) and avx512f,avx512vl,avx
Load contiguous active double-precision (64-bit) floating-point elements from unaligned memory at mem_addr (those with their respective bit set in mask k), and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_expandloadu_psExperimental(x86 or x86-64) and avx512f,avx512vl,avx
Load contiguous active single-precision (32-bit) floating-point elements from unaligned memory at mem_addr (those with their respective bit set in mask k), and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_extractf32x4_psExperimental(x86 or x86-64) and avx512f,avx512vl
Extract 128 bits (composed of 4 packed single-precision (32-bit) floating-point elements) from a, selected with imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_extracti32x4_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Extract 128 bits (composed of 4 packed 32-bit integers) from a, selected with IMM1, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_fixupimm_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Fix up packed double-precision (64-bit) floating-point elements in a and b using packed 64-bit integers in c, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). imm8 is used to set the required flags reporting.
_mm256_mask_fixupimm_psExperimental(x86 or x86-64) and avx512f,avx512vl
Fix up packed single-precision (32-bit) floating-point elements in a and b using packed 32-bit integers in c, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set). imm8 is used to set the required flags reporting.
_mm256_mask_fmadd_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed double-precision (64-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
_mm256_mask_fmadd_psExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed single-precision (32-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
_mm256_mask_fmaddsub_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed double-precision (64-bit) floating-point elements in a and b, alternately add and subtract packed elements in c to/from the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
_mm256_mask_fmaddsub_psExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed single-precision (32-bit) floating-point elements in a and b, alternately add and subtract packed elements in c to/from the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
_mm256_mask_fmsub_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
_mm256_mask_fmsub_psExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
_mm256_mask_fmsubadd_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed double-precision (64-bit) floating-point elements in a and b, alternately subtract and add packed elements in c from/to the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
_mm256_mask_fmsubadd_psExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
_mm256_mask_fnmadd_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed double-precision (64-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
_mm256_mask_fnmadd_psExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed single-precision (32-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
_mm256_mask_fnmsub_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
_mm256_mask_fnmsub_psExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
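The six masked FMA variants above differ only in the signs applied to the product a*b and to c, and in whether the sign alternates by lane. The following scalar sketch (illustrative helper names, not the intrinsics themselves) models the documented per-lane semantics for the four-lane f64 case, including the copy-from-a writemask behaviour:

// Scalar model of the masked 256-bit FMA family for f64 lanes. `k` uses its
// low 4 bits; masked-off lanes keep the value from `a`, matching the
// "elements are copied from a" wording above.
#[derive(Clone, Copy)]
enum FmaOp { Fmadd, Fmsub, Fnmadd, Fnmsub, Fmaddsub, Fmsubadd }

fn mask_fma_model(op: FmaOp, a: [f64; 4], b: [f64; 4], c: [f64; 4], k: u8) -> [f64; 4] {
    let mut dst = a;
    for i in 0..4 {
        if k & (1 << i) == 0 {
            continue;
        }
        // mul_add is the fused (single-rounding) multiply-add.
        dst[i] = match op {
            FmaOp::Fmadd => a[i].mul_add(b[i], c[i]),
            FmaOp::Fmsub => a[i].mul_add(b[i], -c[i]),
            FmaOp::Fnmadd => (-a[i]).mul_add(b[i], c[i]),
            FmaOp::Fnmsub => (-a[i]).mul_add(b[i], -c[i]),
            // fmaddsub: even lanes subtract c, odd lanes add c
            FmaOp::Fmaddsub if i % 2 == 0 => a[i].mul_add(b[i], -c[i]),
            FmaOp::Fmaddsub => a[i].mul_add(b[i], c[i]),
            // fmsubadd: even lanes add c, odd lanes subtract c
            FmaOp::Fmsubadd if i % 2 == 0 => a[i].mul_add(b[i], c[i]),
            FmaOp::Fmsubadd => a[i].mul_add(b[i], -c[i]),
        };
    }
    dst
}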
_mm256_mask_getexp_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Convert the exponent of each packed double-precision (64-bit) floating-point element in a to a double-precision (64-bit) floating-point number representing the integer exponent, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). This intrinsic essentially calculates floor(log2(x)) for each element.
_mm256_mask_getexp_psExperimental(x86 or x86-64) and avx512f,avx512vl
Convert the exponent of each packed single-precision (32-bit) floating-point element in a to a single-precision (32-bit) floating-point number representing the integer exponent, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). This intrinsic essentially calculates floor(log2(x)) for each element.
_mm256_mask_getmant_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Normalize the mantissas of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign.
The mantissa is normalized to the interval specified by interv, which can take the following values:
_MM_MANT_NORM_1_2 // interval [1, 2)
_MM_MANT_NORM_P5_2 // interval [0.5, 2)
_MM_MANT_NORM_P5_1 // interval [0.5, 1)
_MM_MANT_NORM_P75_1P5 // interval [0.75, 1.5)
The sign is determined by sc which can take the following values:
_MM_MANT_SIGN_SRC // sign = sign(src)
_MM_MANT_SIGN_ZERO // sign = 0
_MM_MANT_SIGN_NAN // dst = NaN if sign(src) = 1
_mm256_mask_getmant_psExperimental(x86 or x86-64) and avx512f,avx512vl
Normalize the mantissas of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign.
The mantissa is normalized to the interval specified by interv, which can take the following values:
_MM_MANT_NORM_1_2 // interval [1, 2)
_MM_MANT_NORM_P5_2 // interval [0.5, 2)
_MM_MANT_NORM_P5_1 // interval [0.5, 1)
_MM_MANT_NORM_P75_1P5 // interval [0.75, 1.5)
The sign is determined by sc which can take the following values:
_MM_MANT_SIGN_SRC // sign = sign(src)
_MM_MANT_SIGN_ZERO // sign = 0
_MM_MANT_SIGN_NAN // dst = NaN if sign(src) = 1
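As a rough scalar picture (ignoring the special handling of zero, NaN and infinity that the hardware defines), the getexp/getmant pair above decomposes |x| into 2^e * m: getexp stores e as a float and getmant stores m normalized into the chosen interval, with the usual copy-from-src behaviour for masked-off lanes. A minimal sketch for the default [1, 2) interval and a zero sign, using illustrative helper names:

// Illustrative scalar sketch of the per-lane getexp/getmant decomposition for
// one f64, assuming a normal, non-zero finite input.
fn getexp_getmant_model(x: f64) -> (f64, f64) {
    let e = x.abs().log2().floor(); // what getexp stores, as an f64
    let m = x.abs() / e.exp2();     // mantissa in [1, 2), i.e. _MM_MANT_NORM_1_2 with _MM_MANT_SIGN_ZERO
    (e, m)
}

// e.g. getexp_getmant_model(12.0) == (3.0, 1.5), since 12 = 2^3 * 1.5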
_mm256_mask_gf2p8affine_epi64_epi8Experimental(x86 or x86-64) and avx512gfni,avx512bw,avx512vl
Performs an affine transformation on the packed bytes in x. That is, it computes a*x+b over the Galois Field 2^8 for each packed byte, with a being an 8x8 bit matrix and b being a constant 8-bit immediate value. Each pack of 8 bytes in x is paired with the 64-bit word at the same position in a.
_mm256_mask_gf2p8affineinv_epi64_epi8Experimental(x86 or x86-64) and avx512gfni,avx512bw,avx512vl
Performs an affine transformation on the inverted packed bytes in x. That is, it computes a*inv(x)+b over the Galois Field 2^8 for each packed byte, with a being an 8x8 bit matrix and b being a constant 8-bit immediate value. The inverse of a byte is defined with respect to the reduction polynomial x^8+x^4+x^3+x+1. The inverse of 0 is 0. Each pack of 8 bytes in x is paired with the 64-bit word at the same position in a.
_mm256_mask_gf2p8mul_epi8Experimental(x86 or x86-64) and avx512gfni,avx512bw,avx512vl
Performs a multiplication in GF(2^8) on the packed bytes. The field is in polynomial representation with the reduction polynomial x^8 + x^4 + x^3 + x + 1.
_mm256_mask_insertf32x4Experimental(x86 or x86-64) and avx512f,avx512vl
Copy a to tmp, then insert 128 bits (composed of 4 packed single-precision (32-bit) floating-point elements) from b into tmp at the location specified by imm8. Store tmp to dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_inserti32x4Experimental(x86 or x86-64) and avx512f,avx512vl
Copy a to tmp, then insert 128 bits (composed of 4 packed 32-bit integers) from b into tmp at the location specified by imm8. Store tmp to dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_load_epi32Experimental(x86 or x86-64) and avx512f,avx512vl,avx
Load packed 32-bit integers from memory into dst using writemask k (elements are copied from src when the corresponding mask bit is not set). mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.
_mm256_mask_load_epi64Experimental(x86 or x86-64) and avx512f,avx512vl,avx
Load packed 64-bit integers from memory into dst using writemask k (elements are copied from src when the corresponding mask bit is not set). mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.
_mm256_mask_load_pdExperimental(x86 or x86-64) and avx512f,avx512vl,avx
Load packed double-precision (64-bit) floating-point elements from memory into dst using writemask k (elements are copied from src when the corresponding mask bit is not set). mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.
_mm256_mask_load_psExperimental(x86 or x86-64) and avx512f,avx512vl,avx
Load packed single-precision (32-bit) floating-point elements from memory into dst using writemask k (elements are copied from src when the corresponding mask bit is not set). mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.
_mm256_mask_loadu_epi8Experimental(x86 or x86-64) and avx512f,avx512bw,avx512vl,avx
Load packed 8-bit integers from memory into dst using writemask k (elements are copied from src when the corresponding mask bit is not set). mem_addr does not need to be aligned on any particular boundary.
_mm256_mask_loadu_epi16Experimental(x86 or x86-64) and avx512f,avx512bw,avx512vl,avx
Load packed 16-bit integers from memory into dst using writemask k (elements are copied from src when the corresponding mask bit is not set). mem_addr does not need to be aligned on any particular boundary.
_mm256_mask_loadu_epi32Experimental(x86 or x86-64) and avx512f,avx512vl,avx
Load packed 32-bit integers from memory into dst using writemask k (elements are copied from src when the corresponding mask bit is not set). mem_addr does not need to be aligned on any particular boundary.
_mm256_mask_loadu_epi64Experimental(x86 or x86-64) and avx512f,avx512vl,avx
Load packed 64-bit integers from memory into dst using writemask k (elements are copied from src when the corresponding mask bit is not set). mem_addr does not need to be aligned on any particular boundary.
_mm256_mask_loadu_pdExperimental(x86 or x86-64) and avx512f,avx512vl,avx
Load packed double-precision (64-bit) floating-point elements from memory into dst using writemask k (elements are copied from src when the corresponding mask bit is not set). mem_addr does not need to be aligned on any particular boundary.
_mm256_mask_loadu_psExperimental(x86 or x86-64) and avx512f,avx512vl,avx
Load packed single-precision (32-bit) floating-point elements from memory into dst using writemask k (elements are copied from src when the corresponding mask bit is not set). mem_addr does not need to be aligned on any particular boundary.
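The masked loads above read only the lanes whose mask bit is set; the remaining lanes of the result come from src rather than from memory. A minimal call sketch, assuming a nightly toolchain with the experimental AVX-512 intrinsics enabled (the exact feature gate name varies by toolchain) and a CPU supporting AVX512F and AVX512VL; the helper name and mask construction are illustrative, and the (src, k, mem_addr) argument order mirrors the underlying Intel intrinsic:

#[cfg(target_arch = "x86_64")]
use std::arch::x86_64::*;

// Load only the first `n` lanes (n <= 8) from `data`; the other lanes of the
// result are copied from `src`. The loadu form has no alignment requirement,
// unlike _mm256_mask_load_epi32.
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx512f,avx512vl,avx")]
unsafe fn load_first_n(src: __m256i, data: &[i32; 8], n: u32) -> __m256i {
    let k: __mmask8 = ((1u16 << n) - 1) as __mmask8; // low `n` mask bits set
    _mm256_mask_loadu_epi32(src, k, data.as_ptr())
}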
_mm256_mask_lzcnt_epi32Experimental(x86 or x86-64) and avx512cd,avx512vl
Counts the number of leading zero bits in each packed 32-bit integer in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_lzcnt_epi64Experimental(x86 or x86-64) and avx512cd,avx512vl
Counts the number of leading zero bits in each packed 64-bit integer in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_madd_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Multiply packed signed 16-bit integers in a and b, producing intermediate signed 32-bit integers. Horizontally add adjacent pairs of intermediate 32-bit integers, and pack the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_maddubs_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Multiply packed unsigned 8-bit integers in a by packed signed 8-bit integers in b, producing intermediate signed 16-bit integers. Horizontally add adjacent pairs of intermediate signed 16-bit integers, and pack the saturated results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
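The madd/maddubs pairing step is easiest to see on a single output lane. A scalar sketch of one 16-bit result of the maddubs operation (the writemask then selects per 16-bit lane as in the other entries); the helper name is illustrative:

// One 16-bit output lane of maddubs: two adjacent unsigned bytes of `a` are
// multiplied by the corresponding signed bytes of `b`, and the two 16-bit
// products are added with signed saturation.
fn maddubs_lane(a0: u8, a1: u8, b0: i8, b1: i8) -> i16 {
    let sum = a0 as i32 * b0 as i32 + a1 as i32 * b1 as i32;
    sum.clamp(i16::MIN as i32, i16::MAX as i32) as i16
}

// e.g. maddubs_lane(255, 255, 127, 127) saturates to 32767 (i16::MAX)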
_mm256_mask_max_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 8-bit integers in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_max_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 16-bit integers in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_max_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 32-bit integers in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_max_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 64-bit integers in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_max_epu8Experimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 8-bit integers in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_max_epu16Experimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 16-bit integers in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_max_epu32Experimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 32-bit integers in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_max_epu64Experimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 64-bit integers in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_max_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed double-precision (64-bit) floating-point elements in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_max_psExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed single-precision (32-bit) floating-point elements in a and b, and store packed maximum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_min_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 8-bit integers in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_min_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 16-bit integers in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_min_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 32-bit integers in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_min_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 64-bit integers in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_min_epu8Experimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 8-bit integers in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_min_epu16Experimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 16-bit integers in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_min_epu32Experimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 32-bit integers in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_min_epu64Experimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 64-bit integers in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_min_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed double-precision (64-bit) floating-point elements in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_min_psExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed single-precision (32-bit) floating-point elements in a and b, and store packed minimum values in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_mov_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Move packed 8-bit integers from a into dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_mov_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Move packed 16-bit integers from a into dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_mov_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Move packed 32-bit integers from a to dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_mov_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Move packed 64-bit integers from a to dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_mov_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Move packed double-precision (64-bit) floating-point elements from a to dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_mov_psExperimental(x86 or x86-64) and avx512f,avx512vl
Move packed single-precision (32-bit) floating-point elements from a to dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_movedup_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Duplicate even-indexed double-precision (64-bit) floating-point elements from a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_movehdup_psExperimental(x86 or x86-64) and avx512f,avx512vl
Duplicate odd-indexed single-precision (32-bit) floating-point elements from a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_moveldup_psExperimental(x86 or x86-64) and avx512f,avx512vl
Duplicate even-indexed single-precision (32-bit) floating-point elements from a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_mul_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Multiply the low signed 32-bit integers from each packed 64-bit element in a and b, and store the signed 64-bit results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_mul_epu32Experimental(x86 or x86-64) and avx512f,avx512vl
Multiply the low unsigned 32-bit integers from each packed 64-bit element in a and b, and store the unsigned 64-bit results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_mul_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_mul_psExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_mulhi_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Multiply the packed signed 16-bit integers in a and b, producing intermediate 32-bit integers, and store the high 16 bits of the intermediate integers in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_mulhi_epu16Experimental(x86 or x86-64) and avx512bw,avx512vl
Multiply the packed unsigned 16-bit integers in a and b, producing intermediate 32-bit integers, and store the high 16 bits of the intermediate integers in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_mulhrs_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Multiply packed signed 16-bit integers in a and b, producing intermediate signed 32-bit integers. Truncate each intermediate integer to the 18 most significant bits, round by adding 1, and store bits [16:1] to dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
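The mulhrs rounding step reads more clearly as arithmetic than as bit ranges. A scalar sketch of one 16-bit lane (illustrative helper, not the intrinsic):

// One lane of mulhrs: take the full 32-bit product, shift right by 14,
// add 1 (the rounding step), then keep bits [16:1], i.e. shift right once more.
fn mulhrs_lane(a: i16, b: i16) -> i16 {
    let product = a as i32 * b as i32;
    (((product >> 14) + 1) >> 1) as i16
}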
_mm256_mask_mullo_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Multiply the packed 16-bit integers in a and b, producing intermediate 32-bit integers, and store the low 16 bits of the intermediate integers in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_mullo_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Multiply the packed 32-bit integers in a and b, producing intermediate 64-bit integers, and store the low 32 bits of the intermediate integers in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_multishift_epi64_epi8Experimental(x86 or x86-64) and avx512vbmi,avx512vl
For each 64-bit element in b, select 8 unaligned bytes using a byte-granular shift control within the corresponding 64-bit element of a, and store the 8 assembled bytes to the corresponding 64-bit element of dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_or_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Compute the bitwise OR of packed 32-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_or_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Compute the bitwise OR of packed 64-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_packs_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Convert packed signed 16-bit integers from a and b to packed 8-bit integers using signed saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_packs_epi32Experimental(x86 or x86-64) and avx512bw,avx512vl
Convert packed signed 32-bit integers from a and b to packed 16-bit integers using signed saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_packus_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Convert packed signed 16-bit integers from a and b to packed 8-bit integers using unsigned saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_packus_epi32Experimental(x86 or x86-64) and avx512bw,avx512vl
Convert packed signed 32-bit integers from a and b to packed 16-bit integers using unsigned saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_permute_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle double-precision (64-bit) floating-point elements in a within 128-bit lanes using the control in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_permute_psExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_permutevar_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle double-precision (64-bit) floating-point elements in a within 128-bit lanes using the control in b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_permutevar_psExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_permutex2var_epi8Experimental(x86 or x86-64) and avx512vbmi,avx512vl
Shuffle 8-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
_mm256_mask_permutex2var_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shuffle 16-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
_mm256_mask_permutex2var_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 32-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
_mm256_mask_permutex2var_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 64-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
_mm256_mask_permutex2var_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle double-precision (64-bit) floating-point elements in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
_mm256_mask_permutex2var_psExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle single-precision (32-bit) floating-point elements in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
_mm256_mask_permutex_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 64-bit integers in a within 256-bit lanes using the control in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_permutex_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle double-precision (64-bit) floating-point elements in a within 256-bit lanes using the control in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_permutexvar_epi8Experimental(x86 or x86-64) and avx512vbmi,avx512vl
Shuffle 8-bit integers in a across lanes using the corresponding index in idx, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_permutexvar_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shuffle 16-bit integers in a across lanes using the corresponding index in idx, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_permutexvar_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 32-bit integers in a across lanes using the corresponding index in idx, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_permutexvar_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 64-bit integers in a across lanes using the corresponding index in idx, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_permutexvar_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle double-precision (64-bit) floating-point elements in a across lanes using the corresponding index in idx, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_permutexvar_psExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle single-precision (32-bit) floating-point elements in a across lanes using the corresponding index in idx, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
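The permutexvar entries are full cross-lane gathers from a single source: each destination element picks a[idx[i]], with only the low bits of each index used. A scalar model of the masked 8-lane 32-bit case (illustrative helper name):

// Scalar model of the masked permutexvar for eight 32-bit elements: each
// active lane i picks a[idx[i] % 8]; inactive lanes are copied from `src`.
fn mask_permutexvar_epi32_model(src: [i32; 8], k: u8, idx: [i32; 8], a: [i32; 8]) -> [i32; 8] {
    let mut dst = src;
    for i in 0..8 {
        if k & (1 << i) != 0 {
            dst[i] = a[(idx[i] as usize) & 7]; // only the low 3 index bits are used
        }
    }
    dst
}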
_mm256_mask_popcnt_epi8Experimental(x86 or x86-64) and avx512bitalg,avx512vl
For each packed 8-bit integer in a, map the value to the number of logical 1 bits, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_popcnt_epi16Experimental(x86 or x86-64) and avx512bitalg,avx512vl
For each packed 16-bit integer in a, map the value to the number of logical 1 bits, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_popcnt_epi32Experimental(x86 or x86-64) and avx512vpopcntdq,avx512vl
For each packed 32-bit integer in a, map the value to the number of logical 1 bits, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_popcnt_epi64Experimental(x86 or x86-64) and avx512vpopcntdq,avx512vl
For each packed 64-bit integer in a, map the value to the number of logical 1 bits, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_rcp14_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Compute the approximate reciprocal of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 2^-14.
_mm256_mask_rcp14_psExperimental(x86 or x86-64) and avx512f,avx512vl
Compute the approximate reciprocal of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 2^-14.
_mm256_mask_rol_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Rotate the bits in each packed 32-bit integer in a to the left by the number of bits specified in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_rol_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Rotate the bits in each packed 64-bit integer in a to the left by the number of bits specified in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_rolv_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Rotate the bits in each packed 32-bit integer in a to the left by the number of bits specified in the corresponding element of b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_rolv_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Rotate the bits in each packed 64-bit integer in a to the left by the number of bits specified in the corresponding element of b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_ror_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Rotate the bits in each packed 32-bit integer in a to the right by the number of bits specified in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_ror_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Rotate the bits in each packed 64-bit integer in a to the right by the number of bits specified in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_rorv_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Rotate the bits in each packed 32-bit integer in a to the right by the number of bits specified in the corresponding element of b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_rorv_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Rotate the bits in each packed 64-bit integer in a to the right by the number of bits specified in the corresponding element of b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
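The rol/ror and rolv/rorv entries correspond per lane to Rust's integer rotate operations, with the rotate count taken modulo the element width. A scalar model of the masked variable left-rotate (rolv) over 32-bit lanes (illustrative helper name):

// Scalar model of a masked variable left-rotate over eight u32 lanes: each
// active lane rotates by the corresponding count in `b` (mod 32); inactive
// lanes are copied from `src`.
fn mask_rolv_epi32_model(src: [u32; 8], k: u8, a: [u32; 8], b: [u32; 8]) -> [u32; 8] {
    let mut dst = src;
    for i in 0..8 {
        if k & (1 << i) != 0 {
            dst[i] = a[i].rotate_left(b[i] % 32);
        }
    }
    dst
}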
_mm256_mask_roundscale_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Round packed double-precision (64-bit) floating-point elements in a to the number of fraction bits specified by imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
Rounding is done according to the imm8[2:0] parameter, which can be one of:
_MM_FROUND_TO_NEAREST_INT // round to nearest
_MM_FROUND_TO_NEG_INF // round down
_MM_FROUND_TO_POS_INF // round up
_MM_FROUND_TO_ZERO // truncate
_MM_FROUND_CUR_DIRECTION // use MXCSR.RC; see _MM_SET_ROUNDING_MODE
_mm256_mask_roundscale_psExperimental(x86 or x86-64) and avx512f,avx512vl
Round packed single-precision (32-bit) floating-point elements in a to the number of fraction bits specified by imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
Rounding is done according to the imm8[2:0] parameter, which can be one of:
_MM_FROUND_TO_NEAREST_INT // round to nearest
_MM_FROUND_TO_NEG_INF // round down
_MM_FROUND_TO_POS_INF // round up
_MM_FROUND_TO_ZERO // truncate
_MM_FROUND_CUR_DIRECTION // use MXCSR.RC; see _MM_SET_ROUNDING_MODE
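Roundscale rounds to a fixed number of fraction bits M rather than to a whole integer; M and the rounding mode both come from imm8. Ignoring the mask plumbing, the per-lane arithmetic is essentially the following sketch (shown for round-to-nearest; the other modes swap round for floor, ceil or trunc):

// Scalar sketch of roundscale to `m` fraction bits: scale up by 2^m, round,
// scale back down. In the intrinsic, `m` is encoded in imm8.
fn roundscale_model(x: f64, m: u32) -> f64 {
    let scale = (1u64 << m) as f64;
    (x * scale).round() / scale
}

// e.g. roundscale_model(1.2345, 4) == 1.25, the nearest multiple of 1/16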
_mm256_mask_rsqrt14_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Compute the approximate reciprocal square root of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 2^-14.
_mm256_mask_rsqrt14_psExperimental(x86 or x86-64) and avx512f,avx512vl
Compute the approximate reciprocal square root of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 2^-14.
_mm256_mask_scalef_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Scale the packed double-precision (64-bit) floating-point elements in a using values from b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_scalef_psExperimental(x86 or x86-64) and avx512f,avx512vl
Scale the packed single-precision (32-bit) floating-point elements in a using values from b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_set1_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Broadcast 8-bit integer a to all elements of dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_set1_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Broadcast 16-bit integer a to all elements of dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_set1_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Broadcast 32-bit integer a to all elements of dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_set1_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Broadcast 64-bit integer a to all elements of dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_shldi_epi16Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 16-bit integers in a and b producing an intermediate 32-bit result. Shift the result left by imm8 bits, and store the upper 16-bits in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_shldi_epi32Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 32-bit integers in a and b producing an intermediate 64-bit result. Shift the result left by imm8 bits, and store the upper 32-bits in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_shldi_epi64Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 64-bit integers in a and b producing an intermediate 128-bit result. Shift the result left by imm8 bits, and store the upper 64-bits in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_shldv_epi16Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 16-bit integers in a and b producing an intermediate 32-bit result. Shift the result left by the amount specified in the corresponding element of c, and store the upper 16-bits in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
_mm256_mask_shldv_epi32Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 32-bit integers in a and b producing an intermediate 64-bit result. Shift the result left by the amount specified in the corresponding element of c, and store the upper 32-bits in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
_mm256_mask_shldv_epi64Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 64-bit integers in a and b producing an intermediate 128-bit result. Shift the result left by the amount specified in the corresponding element of c, and store the upper 64-bits in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
_mm256_mask_shrdi_epi16Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 16-bit integers in b and a producing an intermediate 32-bit result. Shift the result right by imm8 bits, and store the lower 16-bits in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_shrdi_epi32Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 32-bit integers in b and a producing an intermediate 64-bit result. Shift the result right by imm8 bits, and store the lower 32-bits in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_shrdi_epi64Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 64-bit integers in b and a producing an intermediate 128-bit result. Shift the result right by imm8 bits, and store the lower 64-bits in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_shrdv_epi16Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 16-bit integers in b and a producing an intermediate 32-bit result. Shift the result right by the amount specified in the corresponding element of c, and store the lower 16-bits in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
_mm256_mask_shrdv_epi32Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 32-bit integers in b and a producing an intermediate 64-bit result. Shift the result right by the amount specified in the corresponding element of c, and store the lower 32-bits in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
_mm256_mask_shrdv_epi64Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 64-bit integers in b and a producing an intermediate 128-bit result. Shift the result right by the amount specified in the corresponding element of c, and store the lower 64-bits in dst using writemask k (elements are copied from a when the corresponding mask bit is not set).
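The shldi/shldv and shrdi/shrdv entries are funnel shifts: two N-bit lanes are concatenated into a 2N-bit value, shifted, and one half is kept; the hardware takes the shift count modulo the element width. A scalar sketch of one 16-bit lane of the left (shld) form (illustrative helper name):

// One lane of shldi for 16-bit elements: concatenate a (high half) and
// b (low half) into 32 bits, shift left, keep the upper 16 bits.
fn shldi_lane_epi16(a: u16, b: u16, count: u32) -> u16 {
    let concat = ((a as u32) << 16) | (b as u32);
    ((concat << (count % 16)) >> 16) as u16
}

// e.g. shldi_lane_epi16(0x1234, 0xABCD, 4) == 0x234A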
_mm256_mask_shuffle_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Shuffle 8-bit integers in a within 128-bit lanes using the control in the corresponding 8-bit element of b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_shuffle_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 32-bit integers in a within 128-bit lanes using the control in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_shuffle_f32x4Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 128-bits (composed of 4 single-precision (32-bit) floating-point elements) selected by imm8 from a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_shuffle_f64x2Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 128-bits (composed of 2 double-precision (64-bit) floating-point elements) selected by imm8 from a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_shuffle_i32x4Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 128-bits (composed of 4 32-bit integers) selected by imm8 from a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_shuffle_i64x2Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 128-bits (composed of 2 64-bit integers) selected by imm8 from a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_shuffle_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle double-precision (64-bit) floating-point elements within 128-bit lanes using the control in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_shuffle_psExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in imm8, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_shufflehi_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shuffle 16-bit integers in the high 64 bits of 128-bit lanes of a using the control in imm8. Store the results in the high 64 bits of 128-bit lanes of dst, with the low 64 bits of 128-bit lanes being copied from a to dst, using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_shufflelo_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shuffle 16-bit integers in the low 64 bits of 128-bit lanes of a using the control in imm8. Store the results in the low 64 bits of 128-bit lanes of dst, with the high 64 bits of 128-bit lanes being copied from a to dst, using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_sll_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shift packed 16-bit integers in a left by count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_sll_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 32-bit integers in a left by count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_sll_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 64-bit integers in a left by count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_slli_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shift packed 16-bit integers in a left by imm8 while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_slli_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 32-bit integers in a left by imm8 while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_slli_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 64-bit integers in a left by imm8 while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_sllv_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shift packed 16-bit integers in a left by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_sllv_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 32-bit integers in a left by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_sllv_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 64-bit integers in a left by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_sqrt_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Compute the square root of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_sqrt_psExperimental(x86 or x86-64) and avx512f,avx512vl
Compute the square root of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_sra_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shift packed 16-bit integers in a right by count while shifting in sign bits, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_sra_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 32-bit integers in a right by count while shifting in sign bits, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_sra_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 64-bit integers in a right by count while shifting in sign bits, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_srai_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shift packed 16-bit integers in a right by imm8 while shifting in sign bits, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_srai_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 32-bit integers in a right by imm8 while shifting in sign bits, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_srai_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 64-bit integers in a right by imm8 while shifting in sign bits, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_srav_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shift packed 16-bit integers in a right by the amount specified by the corresponding element in count while shifting in sign bits, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_srav_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 32-bit integers in a right by the amount specified by the corresponding element in count while shifting in sign bits, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_srav_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 64-bit integers in a right by the amount specified by the corresponding element in count while shifting in sign bits, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_srl_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shift packed 16-bit integers in a right by count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_srl_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 32-bit integers in a right by count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_srl_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 64-bit integers in a right by count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_srli_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shift packed 16-bit integers in a right by imm8 while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_srli_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 32-bit integers in a right by imm8 while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_srli_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 64-bit integers in a right by imm8 while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_srlv_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shift packed 16-bit integers in a right by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_srlv_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 32-bit integers in a right by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_srlv_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 64-bit integers in a right by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_store_epi32Experimental(x86 or x86-64) and avx512f,avx512vl,avx
Store packed 32-bit integers from a into memory using writemask k. mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.
_mm256_mask_store_epi64Experimental(x86 or x86-64) and avx512f,avx512vl,avx
Store packed 64-bit integers from a into memory using writemask k. mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.
_mm256_mask_store_pdExperimental(x86 or x86-64) and avx512f,avx512vl,avx
Store packed double-precision (64-bit) floating-point elements from a into memory using writemask k. mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.
_mm256_mask_store_psExperimental(x86 or x86-64) and avx512f,avx512vl,avx
Store packed single-precision (32-bit) floating-point elements from a into memory using writemask k. mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.
_mm256_mask_storeu_epi8Experimental(x86 or x86-64) and avx512f,avx512bw,avx512vl,avx
Store packed 8-bit integers from a into memory using writemask k. mem_addr does not need to be aligned on any particular boundary.
_mm256_mask_storeu_epi16Experimental(x86 or x86-64) and avx512f,avx512bw,avx512vl,avx
Store packed 16-bit integers from a into memory using writemask k. mem_addr does not need to be aligned on any particular boundary.
_mm256_mask_storeu_epi32Experimental(x86 or x86-64) and avx512f,avx512vl,avx
Store packed 32-bit integers from a into memory using writemask k. mem_addr does not need to be aligned on any particular boundary.
_mm256_mask_storeu_epi64Experimental(x86 or x86-64) and avx512f,avx512vl,avx
Store packed 64-bit integers from a into memory using writemask k. mem_addr does not need to be aligned on any particular boundary.
_mm256_mask_storeu_pdExperimental(x86 or x86-64) and avx512f,avx512vl,avx
Store packed double-precision (64-bit) floating-point elements from a into memory using writemask k. mem_addr does not need to be aligned on any particular boundary.
_mm256_mask_storeu_psExperimental(x86 or x86-64) and avx512f,avx512vl,avx
Store packed single-precision (32-bit) floating-point elements from a into memory using writemask k. mem_addr does not need to be aligned on any particular boundary.
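The masked stores are the mirror image of the masked loads: only the lanes whose mask bit is set are written, and the rest of the destination buffer keeps its previous contents. A minimal call sketch under the same nightly/AVX-512 assumptions as the load example above; the helper name is illustrative, and the (mem_addr, k, a) argument order mirrors the underlying Intel intrinsic:

#[cfg(target_arch = "x86_64")]
use std::arch::x86_64::*;

// Write only the first `n` lanes (n <= 8) of `v` into `out`; the other
// elements of `out` are left untouched. The storeu form has no alignment
// requirement, unlike _mm256_mask_store_epi32.
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx512f,avx512vl,avx")]
unsafe fn store_first_n(out: &mut [i32; 8], n: u32, v: __m256i) {
    let k: __mmask8 = ((1u16 << n) - 1) as __mmask8; // low `n` mask bits set
    _mm256_mask_storeu_epi32(out.as_mut_ptr(), k, v);
}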
_mm256_mask_sub_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Subtract packed 8-bit integers in b from packed 8-bit integers in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_sub_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Subtract packed 16-bit integers in b from packed 16-bit integers in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_sub_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Subtract packed 32-bit integers in b from packed 32-bit integers in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_sub_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Subtract packed 64-bit integers in b from packed 64-bit integers in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_sub_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Subtract packed double-precision (64-bit) floating-point elements in b from packed double-precision (64-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_sub_psExperimental(x86 or x86-64) and avx512f,avx512vl
Subtract packed single-precision (32-bit) floating-point elements in b from packed single-precision (32-bit) floating-point elements in a, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_subs_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Subtract packed signed 8-bit integers in b from packed 8-bit integers in a using saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_subs_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Subtract packed signed 16-bit integers in b from packed 16-bit integers in a using saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_subs_epu8Experimental(x86 or x86-64) and avx512bw,avx512vl
Subtract packed unsigned 8-bit integers in b from packed unsigned 8-bit integers in a using saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_subs_epu16Experimental(x86 or x86-64) and avx512bw,avx512vl
Subtract packed unsigned 16-bit integers in b from packed unsigned 16-bit integers in a using saturation, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_ternarylogic_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Bitwise ternary logic that provides the capability to implement any three-operand binary function; the specific binary function is specified by value in imm8. For each bit in each packed 32-bit integer, the corresponding bit from src, a, and b are used to form a 3 bit index into imm8, and the value at that bit in imm8 is written to the corresponding bit in dst using writemask k at 32-bit granularity (32-bit elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_ternarylogic_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Bitwise ternary logic that provides the capability to implement any three-operand binary function; the specific binary function is specified by value in imm8. For each bit in each packed 64-bit integer, the corresponding bit from src, a, and b are used to form a 3 bit index into imm8, and the value at that bit in imm8 is written to the corresponding bit in dst using writemask k at 64-bit granularity (64-bit elements are copied from src when the corresponding mask bit is not set).
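The imm8 in the ternarylogic entries is an eight-entry truth table: at every bit position the three input bits from src, a and b form an index 0..=7, and the corresponding bit of imm8 is the output bit. A scalar sketch of one 32-bit lane (illustrative helper name):

// One 32-bit lane of ternarylogic: imm8 is a truth table indexed by
// (src_bit << 2) | (a_bit << 1) | b_bit at each of the 32 bit positions.
fn ternarylogic_lane(src: u32, a: u32, b: u32, imm8: u8) -> u32 {
    let mut dst = 0u32;
    for bit in 0..32 {
        let idx = ((src >> bit) & 1) << 2 | ((a >> bit) & 1) << 1 | ((b >> bit) & 1);
        dst |= ((imm8 as u32 >> idx) & 1) << bit;
    }
    dst
}

// e.g. imm8 = 0xE8 yields the bitwise majority of src, a and b, and
// imm8 = 0x96 yields a three-input XOR.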
_mm256_mask_test_epi8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compute the bitwise AND of packed 8-bit integers in a and b, producing intermediate 8-bit values, and set the corresponding bit in result mask k (subject to writemask k) if the intermediate value is non-zero.
_mm256_mask_test_epi16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compute the bitwise AND of packed 16-bit integers in a and b, producing intermediate 16-bit values, and set the corresponding bit in result mask k (subject to writemask k) if the intermediate value is non-zero.
_mm256_mask_test_epi32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compute the bitwise AND of packed 32-bit integers in a and b, producing intermediate 32-bit values, and set the corresponding bit in result mask k (subject to writemask k) if the intermediate value is non-zero.
_mm256_mask_test_epi64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compute the bitwise AND of packed 64-bit integers in a and b, producing intermediate 64-bit values, and set the corresponding bit in result mask k (subject to writemask k) if the intermediate value is non-zero.
_mm256_mask_testn_epi8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compute the bitwise NAND of packed 8-bit integers in a and b, producing intermediate 8-bit values, and set the corresponding bit in result mask k (subject to writemask k) if the intermediate value is zero.
_mm256_mask_testn_epi16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compute the bitwise NAND of packed 16-bit integers in a and b, producing intermediate 16-bit values, and set the corresponding bit in result mask k (subject to writemask k) if the intermediate value is zero.
_mm256_mask_testn_epi32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compute the bitwise NAND of packed 32-bit integers in a and b, producing intermediate 32-bit values, and set the corresponding bit in result mask k (subject to writemask k) if the intermediate value is zero.
_mm256_mask_testn_epi64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compute the bitwise NAND of packed 64-bit integers in a and b, producing intermediate 64-bit values, and set the corresponding bit in result mask k (subject to writemask k) if the intermediate value is zero.
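The test/testn mask computations above, restricted by the incoming writemask, amount to the following scalar sketch for eight 32-bit lanes (illustrative helpers, not the intrinsics):

fn mask_test_epi32_mask(k: u8, a: [u32; 8], b: [u32; 8]) -> u8 {
    let mut m = 0u8;
    for i in 0..8 {
        // bit set where the lane is selected by k and a[i] & b[i] is non-zero
        if (k >> i) & 1 == 1 && (a[i] & b[i]) != 0 {
            m |= 1 << i;
        }
    }
    m
}

fn mask_testn_epi32_mask(k: u8, a: [u32; 8], b: [u32; 8]) -> u8 {
    let mut m = 0u8;
    for i in 0..8 {
        // bit set where the lane is selected by k and a[i] & b[i] is zero
        if (k >> i) & 1 == 1 && (a[i] & b[i]) == 0 {
            m |= 1 << i;
        }
    }
    m
}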
_mm256_mask_unpackhi_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Unpack and interleave 8-bit integers from the high half of each 128-bit lane in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_unpackhi_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Unpack and interleave 16-bit integers from the high half of each 128-bit lane in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_unpackhi_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Unpack and interleave 32-bit integers from the high half of each 128-bit lane in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_unpackhi_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Unpack and interleave 64-bit integers from the high half of each 128-bit lane in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_unpackhi_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Unpack and interleave double-precision (64-bit) floating-point elements from the high half of each 128-bit lane in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_unpackhi_psExperimental(x86 or x86-64) and avx512f,avx512vl
Unpack and interleave single-precision (32-bit) floating-point elements from the high half of each 128-bit lane in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_unpacklo_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Unpack and interleave 8-bit integers from the low half of each 128-bit lane in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_unpacklo_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Unpack and interleave 16-bit integers from the low half of each 128-bit lane in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_unpacklo_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Unpack and interleave 32-bit integers from the low half of each 128-bit lane in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_unpacklo_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Unpack and interleave 64-bit integers from the low half of each 128-bit lane in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_unpacklo_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Unpack and interleave double-precision (64-bit) floating-point elements from the low half of each 128-bit lane in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_unpacklo_psExperimental(x86 or x86-64) and avx512f,avx512vl
Unpack and interleave single-precision (32-bit) floating-point elements from the low half of each 128-bit lane in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
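The per-128-bit-lane interleave performed by the unpack entries can be sketched on eight u32 lanes (two lanes of four elements each); the mask merge step is omitted and the helpers are illustrative only:

fn unpacklo_epi32(a: [u32; 8], b: [u32; 8]) -> [u32; 8] {
    let mut dst = [0u32; 8];
    for lane in 0..2 {
        let base = lane * 4;
        // interleave the low two elements of each 128-bit lane
        dst[base] = a[base];
        dst[base + 1] = b[base];
        dst[base + 2] = a[base + 1];
        dst[base + 3] = b[base + 1];
    }
    dst
}

fn unpackhi_epi32(a: [u32; 8], b: [u32; 8]) -> [u32; 8] {
    let mut dst = [0u32; 8];
    for lane in 0..2 {
        let base = lane * 4;
        // interleave the high two elements of each 128-bit lane
        dst[base] = a[base + 2];
        dst[base + 1] = b[base + 2];
        dst[base + 2] = a[base + 3];
        dst[base + 3] = b[base + 3];
    }
    dst
}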
_mm256_mask_xor_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Compute the bitwise XOR of packed 32-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_mask_xor_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Compute the bitwise XOR of packed 64-bit integers in a and b, and store the results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).
_mm256_maskz_abs_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Compute the absolute value of packed signed 8-bit integers in a, and store the unsigned results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_abs_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Compute the absolute value of packed signed 16-bit integers in a, and store the unsigned results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_abs_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Compute the absolute value of packed signed 32-bit integers in a, and store the unsigned results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_abs_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Compute the absolute value of packed signed 64-bit integers in a, and store the unsigned results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_add_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Add packed 8-bit integers in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_add_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Add packed 16-bit integers in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_add_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Add packed 32-bit integers in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_add_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Add packed 64-bit integers in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_add_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Add packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_add_psExperimental(x86 or x86-64) and avx512f,avx512vl
Add packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_adds_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Add packed signed 8-bit integers in a and b using saturation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_adds_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Add packed signed 16-bit integers in a and b using saturation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_adds_epu8Experimental(x86 or x86-64) and avx512bw,avx512vl
Add packed unsigned 8-bit integers in a and b using saturation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_adds_epu16Experimental(x86 or x86-64) and avx512bw,avx512vl
Add packed unsigned 16-bit integers in a and b using saturation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
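The zeromask behavior shared by the _mm256_maskz_* entries differs from the writemask variants only in what happens to inactive lanes. A scalar sketch combining it with the saturating byte add above (illustrative, not the intrinsic):

fn maskz_adds_epu8(k: u32, a: [u8; 32], b: [u8; 32]) -> [u8; 32] {
    let mut dst = [0u8; 32]; // inactive lanes stay zero rather than copying a source
    for i in 0..32 {
        if (k >> i) & 1 == 1 {
            dst[i] = a[i].saturating_add(b[i]); // clamps at 255 instead of wrapping
        }
    }
    dst
}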
_mm256_maskz_alignr_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Concatenate pairs of 16-byte blocks in a and b into a 32-byte temporary result, shift the result right by imm8 bytes, and store the low 16 bytes in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_alignr_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Concatenate a and b into a 64-byte intermediate result, shift the result right by imm8 32-bit elements, and store the low 32 bytes (8 elements) in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_alignr_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Concatenate a and b into a 64-byte intermediate result, shift the result right by imm8 64-bit elements, and store the low 32 bytes (4 elements) in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
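A scalar sketch of the 32-bit element alignment described above (illustrative; it assumes the shift count has already been reduced to the immediate's low bits):

fn maskz_alignr_epi32(k: u8, a: [u32; 8], b: [u32; 8], shift: usize) -> [u32; 8] {
    let mut temp = [0u32; 16];
    temp[..8].copy_from_slice(&b); // low half of the concatenation
    temp[8..].copy_from_slice(&a); // high half of the concatenation
    let mut dst = [0u32; 8];
    for i in 0..8 {
        if (k >> i) & 1 == 1 && i + shift < 16 {
            dst[i] = temp[i + shift]; // zeros shift in past the top of temp
        }
    }
    dst
}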
_mm256_maskz_and_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Compute the bitwise AND of packed 32-bit integers in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_and_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Compute the bitwise AND of packed 64-bit integers in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_andnot_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Compute the bitwise NOT of packed 32-bit integers in a and then AND with b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_andnot_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Compute the bitwise NOT of packed 64-bit integers in a and then AND with b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_avg_epu8Experimental(x86 or x86-64) and avx512bw,avx512vl
Average packed unsigned 8-bit integers in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_avg_epu16Experimental(x86 or x86-64) and avx512bw,avx512vl
Average packed unsigned 16-bit integers in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_broadcast_f32x4Experimental(x86 or x86-64) and avx512f,avx512vl
Broadcast the 4 packed single-precision (32-bit) floating-point elements from a to all elements of dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_broadcast_i32x4Experimental(x86 or x86-64) and avx512f,avx512vl
Broadcast the 4 packed 32-bit integers from a to all elements of dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_broadcastb_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Broadcast the low packed 8-bit integer from a to all elements of dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_broadcastd_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Broadcast the low packed 32-bit integer from a to all elements of dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_broadcastq_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Broadcast the low packed 64-bit integer from a to all elements of dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_broadcastsd_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Broadcast the low double-precision (64-bit) floating-point element from a to all elements of dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_broadcastss_psExperimental(x86 or x86-64) and avx512f,avx512vl
Broadcast the low single-precision (32-bit) floating-point element from a to all elements of dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_broadcastw_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Broadcast the low packed 16-bit integer from a to all elements of dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_compress_epi8Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Contiguously store the active 8-bit integers in a (those with their respective bit set in zeromask k) to dst, and set the remaining elements to zero.
_mm256_maskz_compress_epi16Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Contiguously store the active 16-bit integers in a (those with their respective bit set in zeromask k) to dst, and set the remaining elements to zero.
_mm256_maskz_compress_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Contiguously store the active 32-bit integers in a (those with their respective bit set in zeromask k) to dst, and set the remaining elements to zero.
_mm256_maskz_compress_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Contiguously store the active 64-bit integers in a (those with their respective bit set in zeromask k) to dst, and set the remaining elements to zero.
_mm256_maskz_compress_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Contiguously store the active double-precision (64-bit) floating-point elements in a (those with their respective bit set in zeromask k) to dst, and set the remaining elements to zero.
_mm256_maskz_compress_psExperimental(x86 or x86-64) and avx512f,avx512vl
Contiguously store the active single-precision (32-bit) floating-point elements in a (those with their respective bit set in zeromask k) to dst, and set the remaining elements to zero.
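The compress operation packs the selected elements toward the low end of the vector; in scalar form, for eight u32 lanes (illustrative, not the intrinsic):

fn maskz_compress_epi32(k: u8, a: [u32; 8]) -> [u32; 8] {
    let mut dst = [0u32; 8]; // trailing lanes are left zeroed
    let mut next = 0;
    for i in 0..8 {
        if (k >> i) & 1 == 1 {
            dst[next] = a[i]; // active elements become contiguous
            next += 1;
        }
    }
    dst
}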
_mm256_maskz_conflict_epi32Experimental(x86 or x86-64) and avx512cd,avx512vl
Test each 32-bit element of a for equality with all other elements in a closer to the least significant bit using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Each element’s comparison forms a zero extended bit vector in dst.
_mm256_maskz_conflict_epi64Experimental(x86 or x86-64) and avx512cd,avx512vl
Test each 64-bit element of a for equality with all other elements in a closer to the least significant bit using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Each element’s comparison forms a zero extended bit vector in dst.
_mm256_maskz_cvt_roundps_phExperimental(x86 or x86-64) and avx512f,avx512vl
Convert packed single-precision (32-bit) floating-point elements in a to packed half-precision (16-bit) floating-point elements, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
Rounding is done according to the imm8[2:0] parameter, which can be one of:
(_MM_FROUND_TO_NEAREST_INT | _MM_FROUND_NO_EXC) // round to nearest, and suppress exceptions
(_MM_FROUND_TO_NEG_INF | _MM_FROUND_NO_EXC) // round down, and suppress exceptions
(_MM_FROUND_TO_POS_INF | _MM_FROUND_NO_EXC) // round up, and suppress exceptions
(_MM_FROUND_TO_ZERO | _MM_FROUND_NO_EXC) // truncate, and suppress exceptions
_MM_FROUND_CUR_DIRECTION // use MXCSR.RC; see _MM_SET_ROUNDING_MODE
_mm256_maskz_cvtepi8_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Sign extend packed 8-bit integers in a to packed 16-bit integers, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtepi8_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Sign extend packed 8-bit integers in a to packed 32-bit integers, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtepi8_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Sign extend packed 8-bit integers in the low 4 bytes of a to packed 64-bit integers, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtepi16_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Convert packed 16-bit integers in a to packed 8-bit integers with truncation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtepi16_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Sign extend packed 16-bit integers in a to packed 32-bit integers, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtepi16_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Sign extend packed 16-bit integers in a to packed 64-bit integers, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtepi32_epi8Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed 32-bit integers in a to packed 8-bit integers with truncation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtepi32_epi16Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed 32-bit integers in a to packed 16-bit integers with truncation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtepi32_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Sign extend packed 32-bit integers in a to packed 64-bit integers, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtepi32_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Convert packed signed 32-bit integers in a to packed double-precision (64-bit) floating-point elements, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtepi32_psExperimental(x86 or x86-64) and avx512f,avx512vl
Convert packed signed 32-bit integers in a to packed single-precision (32-bit) floating-point elements, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtepi64_epi8Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed 64-bit integers in a to packed 8-bit integers with truncation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtepi64_epi16Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed 64-bit integers in a to packed 16-bit integers with truncation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtepi64_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed 64-bit integers in a to packed 32-bit integers with truncation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtepu8_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Zero extend packed unsigned 8-bit integers in a to packed 16-bit integers, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtepu8_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Zero extend packed unsigned 8-bit integers in the low 8 bytes of a to packed 32-bit integers, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtepu8_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Zero extend packed unsigned 8-bit integers in the low 4 bytes of a to packed 64-bit integers, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtepu16_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Zero extend packed unsigned 16-bit integers in a to packed 32-bit integers, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtepu16_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Zero extend packed unsigned 16-bit integers in the low 8 bytes of a to packed 64-bit integers, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtepu32_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Zero extend packed unsigned 32-bit integers in a to packed 64-bit integers, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtepu32_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Convert packed unsigned 32-bit integers in a to packed double-precision (64-bit) floating-point elements, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtne2ps_pbhExperimental(x86 or x86-64) and avx512bf16,avx512vl
Convert packed single-precision (32-bit) floating-point elements in two vectors a and b to packed BF16 (16-bit) floating-point elements, and store the results in a single vector dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Intel’s documentation
_mm256_maskz_cvtneps_pbhExperimental(x86 or x86-64) and avx512bf16,avx512vl
Convert packed single-precision (32-bit) floating-point elements in a to packed BF16 (16-bit) floating-point elements, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Intel’s documentation
_mm256_maskz_cvtpd_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtpd_epu32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed double-precision (64-bit) floating-point elements in a to packed unsigned 32-bit integers, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtpd_psExperimental(x86 or x86-64) and avx512f,avx512vl
Convert packed double-precision (64-bit) floating-point elements in a to packed single-precision (32-bit) floating-point elements, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtph_psExperimental(x86 or x86-64) and avx512f,avx512vl
Convert packed half-precision (16-bit) floating-point elements in a to packed single-precision (32-bit) floating-point elements, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtps_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtps_epu32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtps_phExperimental(x86 or x86-64) and avx512f,avx512vl
Convert packed single-precision (32-bit) floating-point elements in a to packed half-precision (16-bit) floating-point elements, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
Rounding is done according to the imm8[2:0] parameter, which can be one of:
_MM_FROUND_TO_NEAREST_INT // round to nearest
_MM_FROUND_TO_NEG_INF // round down
_MM_FROUND_TO_POS_INF // round up
_MM_FROUND_TO_ZERO // truncate
_MM_FROUND_CUR_DIRECTION // use MXCSR.RC; see _MM_SET_ROUNDING_MODE
_mm256_maskz_cvtsepi16_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Convert packed signed 16-bit integers in a to packed 8-bit integers with signed saturation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtsepi32_epi8Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed signed 32-bit integers in a to packed 8-bit integers with signed saturation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtsepi32_epi16Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed signed 32-bit integers in a to packed 16-bit integers with signed saturation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtsepi64_epi8Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed signed 64-bit integers in a to packed 8-bit integers with signed saturation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtsepi64_epi16Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed signed 64-bit integers in a to packed 16-bit integers with signed saturation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtsepi64_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed signed 64-bit integers in a to packed 32-bit integers with signed saturation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
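The difference between the plain truncating narrowings (cvtepi64_epi32 and friends) and the signed-saturating ones (cvtsepi64_epi32 and friends) is easiest to see per element; the helpers below are illustrative and omit the mask step:

fn truncate_i64_to_i32(x: i64) -> i32 {
    x as i32 // keeps only the low 32 bits
}

fn saturate_i64_to_i32(x: i64) -> i32 {
    x.clamp(i32::MIN as i64, i32::MAX as i64) as i32 // clamps to the i32 range
}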
_mm256_maskz_cvttpd_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvttpd_epu32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed double-precision (64-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvttps_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvttps_epu32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtusepi16_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Convert packed unsigned 16-bit integers in a to packed unsigned 8-bit integers with unsigned saturation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtusepi32_epi8Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed unsigned 32-bit integers in a to packed unsigned 8-bit integers with unsigned saturation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtusepi32_epi16Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed unsigned 32-bit integers in a to packed unsigned 16-bit integers with unsigned saturation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtusepi64_epi8Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed unsigned 64-bit integers in a to packed unsigned 8-bit integers with unsigned saturation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtusepi64_epi16Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed unsigned 64-bit integers in a to packed unsigned 16-bit integers with unsigned saturation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_cvtusepi64_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Convert packed unsigned 64-bit integers in a to packed unsigned 32-bit integers with unsigned saturation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_dbsad_epu8Experimental(x86 or x86-64) and avx512bw,avx512vl
Compute the sum of absolute differences (SADs) of quadruplets of unsigned 8-bit integers in a compared to those in b, and store the 16-bit results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Four SADs are performed on four 8-bit quadruplets for each 64-bit lane. The first two SADs use the lower 8-bit quadruplet of the lane from a, and the last two SADs use the upper 8-bit quadruplet of the lane from a. Quadruplets from b are selected from within 128-bit lanes according to the control in imm8, and each SAD in each 64-bit lane uses the selected quadruplet at 8-bit offsets.
_mm256_maskz_div_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Divide packed double-precision (64-bit) floating-point elements in a by packed elements in b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_div_psExperimental(x86 or x86-64) and avx512f,avx512vl
Divide packed single-precision (32-bit) floating-point elements in a by packed elements in b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_dpbf16_psExperimental(x86 or x86-64) and avx512bf16,avx512vl
Compute dot-product of BF16 (16-bit) floating-point pairs in a and b, accumulating the intermediate single-precision (32-bit) floating-point elements with elements in src, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). Intel’s documentation
_mm256_maskz_dpbusd_epi32Experimental(x86 or x86-64) and avx512vnni,avx512vl
Multiply groups of 4 adjacent pairs of unsigned 8-bit integers in a with corresponding signed 8-bit integers in b, producing 4 intermediate signed 16-bit results. Sum these 4 results with the corresponding 32-bit integer in src, and store the packed 32-bit results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_dpbusds_epi32Experimental(x86 or x86-64) and avx512vnni,avx512vl
Multiply groups of 4 adjacent pairs of unsigned 8-bit integers in a with corresponding signed 8-bit integers in b, producing 4 intermediate signed 16-bit results. Sum these 4 results with the corresponding 32-bit integer in src using signed saturation, and store the packed 32-bit results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_dpwssd_epi32Experimental(x86 or x86-64) and avx512vnni,avx512vl
Multiply groups of 2 adjacent pairs of signed 16-bit integers in a with corresponding 16-bit integers in b, producing 2 intermediate signed 32-bit results. Sum these 2 results with the corresponding 32-bit integer in src, and store the packed 32-bit results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_dpwssds_epi32Experimental(x86 or x86-64) and avx512vnni,avx512vl
Multiply groups of 2 adjacent pairs of signed 16-bit integers in a with corresponding 16-bit integers in b, producing 2 intermediate signed 32-bit results. Sum these 2 results with the corresponding 32-bit integer in src using signed saturation, and store the packed 32-bit results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
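Each of these dot-product entries accumulates a small group of products onto a 32-bit lane of src. A scalar sketch of maskz_dpbusd_epi32 on eight dword lanes (illustrative; the *s variants saturate the final accumulation instead of wrapping):

fn maskz_dpbusd_epi32(k: u8, src: [i32; 8], a: [u8; 32], b: [i8; 32]) -> [i32; 8] {
    let mut dst = [0i32; 8];
    for i in 0..8 {
        if (k >> i) & 1 == 1 {
            let mut acc = src[i];
            for j in 0..4 {
                // u8 zero-extends, i8 sign-extends before the multiply
                acc = acc.wrapping_add(a[4 * i + j] as i32 * b[4 * i + j] as i32);
            }
            dst[i] = acc;
        }
    }
    dst
}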
_mm256_maskz_expand_epi8Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Load contiguous active 8-bit integers from a (those with their respective bit set in mask k), and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_expand_epi16Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Load contiguous active 16-bit integers from a (those with their respective bit set in mask k), and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_expand_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Load contiguous active 32-bit integers from a (those with their respective bit set in mask k), and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_expand_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Load contiguous active 64-bit integers from a (those with their respective bit set in mask k), and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_expand_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Load contiguous active double-precision (64-bit) floating-point elements from a (those with their respective bit set in mask k), and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_expand_psExperimental(x86 or x86-64) and avx512f,avx512vl
Load contiguous active single-precision (32-bit) floating-point elements from a (those with their respective bit set in mask k), and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
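Expand is the inverse of compress: consecutive elements from the low end of a are scattered into the lanes selected by the mask. A scalar sketch on eight u32 lanes (illustrative, not the intrinsic):

fn maskz_expand_epi32(k: u8, a: [u32; 8]) -> [u32; 8] {
    let mut dst = [0u32; 8];
    let mut next = 0;
    for i in 0..8 {
        if (k >> i) & 1 == 1 {
            dst[i] = a[next]; // next unread element lands in the next selected lane
            next += 1;
        }
    }
    dst
}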
_mm256_maskz_expandloadu_epi8Experimental(x86 or x86-64) and avx512f,avx512bw,avx512vbmi2,avx512vl,avx
Load contiguous active 8-bit integers from unaligned memory at mem_addr (those with their respective bit set in mask k), and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_expandloadu_epi16Experimental(x86 or x86-64) and avx512f,avx512vbmi2,avx512vl,avx
Load contiguous active 16-bit integers from unaligned memory at mem_addr (those with their respective bit set in mask k), and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_expandloadu_epi32Experimental(x86 or x86-64) and avx512f,avx512vl,avx
Load contiguous active 32-bit integers from unaligned memory at mem_addr (those with their respective bit set in mask k), and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_expandloadu_epi64Experimental(x86 or x86-64) and avx512f,avx512vl,avx
Load contiguous active 64-bit integers from unaligned memory at mem_addr (those with their respective bit set in mask k), and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_expandloadu_pdExperimental(x86 or x86-64) and avx512f,avx512vl,avx
Load contiguous active double-precision (64-bit) floating-point elements from unaligned memory at mem_addr (those with their respective bit set in mask k), and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_expandloadu_psExperimental(x86 or x86-64) and avx512f,avx512vl,avx
Load contiguous active single-precision (32-bit) floating-point elements from unaligned memory at mem_addr (those with their respective bit set in mask k), and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_extractf32x4_psExperimental(x86 or x86-64) and avx512f,avx512vl
Extract 128 bits (composed of 4 packed single-precision (32-bit) floating-point elements) from a, selected with imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_extracti32x4_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Extract 128 bits (composed of 4 packed 32-bit integers) from a, selected with imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_fixupimm_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Fix up packed double-precision (64-bit) floating-point elements in a and b using packed 64-bit integers in c, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). imm8 is used to set the required flags reporting.
_mm256_maskz_fixupimm_psExperimental(x86 or x86-64) and avx512f,avx512vl
Fix up packed single-precision (32-bit) floating-point elements in a and b using packed 32-bit integers in c, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). imm8 is used to set the required flags reporting.
_mm256_maskz_fmadd_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed double-precision (64-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_fmadd_psExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed single-precision (32-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_fmaddsub_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed double-precision (64-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_fmaddsub_psExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_fmsub_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_fmsub_psExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_fmsubadd_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed double-precision (64-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_fmsubadd_psExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_fnmadd_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed double-precision (64-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_fnmadd_psExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed single-precision (32-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_fnmsub_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_fnmsub_psExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_getexp_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Convert the exponent of each packed double-precision (64-bit) floating-point element in a to a double-precision (64-bit) floating-point number representing the integer exponent, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). This intrinsic essentially calculates floor(log2(x)) for each element.
_mm256_maskz_getexp_psExperimental(x86 or x86-64) and avx512f,avx512vl
Convert the exponent of each packed single-precision (32-bit) floating-point element in a to a single-precision (32-bit) floating-point number representing the integer exponent, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). This intrinsic essentially calculates floor(log2(x)) for each element.
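For a normal, non-zero input the getexp result is just the unbiased exponent returned as a float; a per-element sketch (illustrative, ignoring zeros, denormals, infinities, and NaN):

fn getexp_normal_f32(x: f32) -> f32 {
    // extract the biased exponent field and remove the bias; equals floor(log2(|x|))
    (((x.to_bits() >> 23) & 0xff) as i32 - 127) as f32
}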
_mm256_maskz_getmant_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Normalize the mantissas of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign.
The mantissa is normalized to the interval specified by interv, which can take the following values:
_MM_MANT_NORM_1_2 // interval [1, 2)
_MM_MANT_NORM_P5_2 // interval [0.5, 2)
_MM_MANT_NORM_P5_1 // interval [0.5, 1)
_MM_MANT_NORM_P75_1P5 // interval [0.75, 1.5)
The sign is determined by sc, which can take the following values:
_MM_MANT_SIGN_SRC // sign = sign(src)
_MM_MANT_SIGN_ZERO // sign = 0
_MM_MANT_SIGN_NAN // dst = NaN if sign(src) = 1
_mm256_maskz_getmant_psExperimental(x86 or x86-64) and avx512f,avx512vl
Normalize the mantissas of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign.
The mantissa is normalized to the interval specified by interv, which can take the following values:
_MM_MANT_NORM_1_2 // interval [1, 2)
_MM_MANT_NORM_P5_2 // interval [0.5, 2)
_MM_MANT_NORM_P5_1 // interval [0.5, 1)
_MM_MANT_NORM_P75_1P5 // interval [0.75, 1.5)
The sign is determined by sc, which can take the following values:
_MM_MANT_SIGN_SRC // sign = sign(src)
_MM_MANT_SIGN_ZERO // sign = 0
_MM_MANT_SIGN_NAN // dst = NaN if sign(src) = 1
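For the two unambiguous interval options listed above, the normalization amounts to rescaling the value by a power of two; a per-element sketch for normal f32 inputs with sign handling as in _MM_MANT_SIGN_ZERO (illustrative only):

fn getmant_1_2(x: f32) -> f32 {
    // keep the mantissa bits, force the unbiased exponent to 0: result in [1, 2)
    f32::from_bits((x.to_bits() & 0x007f_ffff) | 0x3f80_0000)
}

fn getmant_p5_1(x: f32) -> f32 {
    getmant_1_2(x) / 2.0 // result in [0.5, 1)
}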
_mm256_maskz_gf2p8affine_epi64_epi8Experimental(x86 or x86-64) and avx512gfni,avx512bw,avx512vl
Performs an affine transformation on the packed bytes in x. That is, it computes a*x+b over the Galois Field 2^8 for each packed byte, with a being an 8x8 bit matrix and b being a constant 8-bit immediate value. Each pack of 8 bytes in x is paired with the 64-bit word at the same position in a. The results are stored in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_gf2p8affineinv_epi64_epi8Experimental(x86 or x86-64) and avx512gfni,avx512bw,avx512vl
Performs an affine transformation on the inverted packed bytes in x. That is, it computes a*inv(x)+b over the Galois Field 2^8 for each packed byte, with a being an 8x8 bit matrix and b being a constant 8-bit immediate value. The inverse of a byte is defined with respect to the reduction polynomial x^8+x^4+x^3+x+1; the inverse of 0 is 0. Each pack of 8 bytes in x is paired with the 64-bit word at the same position in a. The results are stored in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_gf2p8mul_epi8Experimental(x86 or x86-64) and avx512gfni,avx512bw,avx512vl
Performs a multiplication in GF(2^8) on the packed bytes. The field is in polynomial representation with the reduction polynomial x^8 + x^4 + x^3 + x + 1. The results are stored in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
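The byte-wise field multiplication can be written out with the classic shift-and-reduce loop; the helper below is an illustrative scalar reference for one byte pair, not the intrinsic:

fn gf2p8_mul(mut a: u8, mut b: u8) -> u8 {
    let mut p = 0u8;
    for _ in 0..8 {
        if b & 1 != 0 {
            p ^= a; // addition in GF(2^8) is XOR
        }
        let carry = a & 0x80 != 0;
        a <<= 1;
        if carry {
            a ^= 0x1B; // reduce modulo x^8 + x^4 + x^3 + x + 1
        }
        b >>= 1;
    }
    p
}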
_mm256_maskz_insertf32x4Experimental(x86 or x86-64) and avx512f,avx512vl
Copy a to tmp, then insert 128 bits (composed of 4 packed single-precision (32-bit) floating-point elements) from b into tmp at the location specified by imm8. Store tmp to dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_inserti32x4Experimental(x86 or x86-64) and avx512f,avx512vl
Copy a to tmp, then insert 128 bits (composed of 4 packed 32-bit integers) from b into tmp at the location specified by imm8. Store tmp to dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_load_epi32Experimental(x86 or x86-64) and avx512f,avx512vl,avx
Load packed 32-bit integers from memory into dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.
_mm256_maskz_load_epi64Experimental(x86 or x86-64) and avx512f,avx512vl,avx
Load packed 64-bit integers from memory into dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.
_mm256_maskz_load_pdExperimental(x86 or x86-64) and avx512f,avx512vl,avx
Load packed double-precision (64-bit) floating-point elements from memory into dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.
_mm256_maskz_load_psExperimental(x86 or x86-64) and avx512f,avx512vl,avx
Load packed single-precision (32-bit) floating-point elements from memory into dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.
_mm256_maskz_loadu_epi8Experimental(x86 or x86-64) and avx512f,avx512bw,avx512vl,avx
Load packed 8-bit integers from memory into dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). mem_addr does not need to be aligned on any particular boundary.
_mm256_maskz_loadu_epi16Experimental(x86 or x86-64) and avx512f,avx512bw,avx512vl,avx
Load packed 16-bit integers from memory into dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). mem_addr does not need to be aligned on any particular boundary.
_mm256_maskz_loadu_epi32Experimental(x86 or x86-64) and avx512f,avx512vl,avx
Load packed 32-bit integers from memory into dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). mem_addr does not need to be aligned on any particular boundary.
_mm256_maskz_loadu_epi64Experimental(x86 or x86-64) and avx512f,avx512vl,avx
Load packed 64-bit integers from memory into dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). mem_addr does not need to be aligned on any particular boundary.
_mm256_maskz_loadu_pdExperimental(x86 or x86-64) and avx512f,avx512vl,avx
Load packed double-precision (64-bit) floating-point elements from memory into dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). mem_addr does not need to be aligned on any particular boundary.
_mm256_maskz_loadu_psExperimental(x86 or x86-64) and avx512f,avx512vl,avx
Load packed single-precision (32-bit) floating-point elements from memory into dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). mem_addr does not need to be aligned on any particular boundary.
_mm256_maskz_lzcnt_epi32Experimental(x86 or x86-64) and avx512cd,avx512vl
Count the number of leading zero bits in each packed 32-bit integer in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_lzcnt_epi64Experimental(x86 or x86-64) and avx512cd,avx512vl
Count the number of leading zero bits in each packed 64-bit integer in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_madd_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Multiply packed signed 16-bit integers in a and b, producing intermediate signed 32-bit integers. Horizontally add adjacent pairs of intermediate 32-bit integers, and pack the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_maddubs_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Multiply packed unsigned 8-bit integers in a by packed signed 8-bit integers in b, producing intermediate signed 16-bit integers. Horizontally add adjacent pairs of intermediate signed 16-bit integers, and pack the saturated results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
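Both horizontal multiply-add entries pair adjacent elements before summing; in scalar form (illustrative helpers without the mask step):

fn madd_epi16(a: [i16; 16], b: [i16; 16]) -> [i32; 8] {
    let mut dst = [0i32; 8];
    for i in 0..8 {
        // two adjacent 16-bit products summed into one 32-bit lane (wraps in the
        // single corner case where both products are i16::MIN squared)
        dst[i] = (a[2 * i] as i32 * b[2 * i] as i32)
            .wrapping_add(a[2 * i + 1] as i32 * b[2 * i + 1] as i32);
    }
    dst
}

fn maddubs_epi16(a: [u8; 32], b: [i8; 32]) -> [i16; 16] {
    let mut dst = [0i16; 16];
    for i in 0..16 {
        let p0 = a[2 * i] as i32 * b[2 * i] as i32;
        let p1 = a[2 * i + 1] as i32 * b[2 * i + 1] as i32;
        // the pair sum saturates to the i16 range
        dst[i] = (p0 + p1).clamp(i16::MIN as i32, i16::MAX as i32) as i16;
    }
    dst
}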
_mm256_maskz_max_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 8-bit integers in a and b, and store packed maximum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_max_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 16-bit integers in a and b, and store packed maximum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_max_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 32-bit integers in a and b, and store packed maximum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_max_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 64-bit integers in a and b, and store packed maximum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_max_epu8Experimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 8-bit integers in a and b, and store packed maximum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_max_epu16Experimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 16-bit integers in a and b, and store packed maximum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_max_epu32Experimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 32-bit integers in a and b, and store packed maximum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_max_epu64Experimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 64-bit integers in a and b, and store packed maximum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_max_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed double-precision (64-bit) floating-point elements in a and b, and store packed maximum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_max_psExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed single-precision (32-bit) floating-point elements in a and b, and store packed maximum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_min_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 8-bit integers in a and b, and store packed minimum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_min_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed signed 16-bit integers in a and b, and store packed minimum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_min_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 32-bit integers in a and b, and store packed minimum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_min_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 64-bit integers in a and b, and store packed minimum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_min_epu8Experimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 8-bit integers in a and b, and store packed minimum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_min_epu16Experimental(x86 or x86-64) and avx512bw,avx512vl
Compare packed unsigned 16-bit integers in a and b, and store packed minimum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_min_epu32Experimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 32-bit integers in a and b, and store packed minimum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_min_epu64Experimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 64-bit integers in a and b, and store packed minimum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_min_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed double-precision (64-bit) floating-point elements in a and b, and store packed minimum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_min_psExperimental(x86 or x86-64) and avx512f,avx512vl
Compare packed single-precision (32-bit) floating-point elements in a and b, and store packed minimum values in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_mov_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Move packed 8-bit integers from a into dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_mov_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Move packed 16-bit integers from a into dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_mov_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Move packed 32-bit integers from a into dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_mov_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Move packed 64-bit integers from a into dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_mov_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Move packed double-precision (64-bit) floating-point elements from a into dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_mov_psExperimental(x86 or x86-64) and avx512f,avx512vl
Move packed single-precision (32-bit) floating-point elements from a into dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_movedup_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Duplicate even-indexed double-precision (64-bit) floating-point elements from a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_movehdup_psExperimental(x86 or x86-64) and avx512f,avx512vl
Duplicate odd-indexed single-precision (32-bit) floating-point elements from a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_moveldup_psExperimental(x86 or x86-64) and avx512f,avx512vl
Duplicate even-indexed single-precision (32-bit) floating-point elements from a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_mul_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Multiply the low signed 32-bit integers from each packed 64-bit element in a and b, and store the signed 64-bit results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_mul_epu32Experimental(x86 or x86-64) and avx512f,avx512vl
Multiply the low unsigned 32-bit integers from each packed 64-bit element in a and b, and store the unsigned 64-bit results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_mul_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_mul_psExperimental(x86 or x86-64) and avx512f,avx512vl
Multiply packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_mulhi_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Multiply the packed signed 16-bit integers in a and b, producing intermediate 32-bit integers, and store the high 16 bits of the intermediate integers in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_mulhi_epu16Experimental(x86 or x86-64) and avx512bw,avx512vl
Multiply the packed unsigned 16-bit integers in a and b, producing intermediate 32-bit integers, and store the high 16 bits of the intermediate integers in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_mulhrs_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Multiply packed signed 16-bit integers in a and b, producing intermediate signed 32-bit integers. Truncate each intermediate integer to the 18 most significant bits, round by adding 1, and store bits [16:1] to dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_mullo_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Multiply the packed 16-bit integers in a and b, producing intermediate 32-bit integers, and store the low 16 bits of the intermediate integers in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_mullo_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Multiply the packed 32-bit integers in a and b, producing intermediate 64-bit integers, and store the low 32 bits of the intermediate integers in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
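The mulhi/mullo/mulhrs descriptions above are easier to read as arithmetic on a single pair of lanes; the vector intrinsics repeat the same computation for every lane and then apply the mask. A plain-Rust sketch of one 16-bit lane (illustrative helper names, not part of this module):

/// High 16 bits of the full 32-bit signed product (`mulhi_epi16`).
fn mulhi_epi16_lane(a: i16, b: i16) -> i16 {
    ((a as i32 * b as i32) >> 16) as i16
}

/// Low 16 bits of the full 32-bit product (`mullo_epi16`).
fn mullo_epi16_lane(a: i16, b: i16) -> i16 {
    (a as i32 * b as i32) as i16
}

/// Rounded, scaled product (`mulhrs_epi16`): keep the 18 most significant
/// bits of the 32-bit product, add 1 to round, then take bits [16:1].
fn mulhrs_epi16_lane(a: i16, b: i16) -> i16 {
    let product = a as i32 * b as i32;
    (((product >> 14) + 1) >> 1) as i16
}

fn main() {
    assert_eq!(mulhi_epi16_lane(0x4000, 0x4000), 0x1000);
    assert_eq!(mullo_epi16_lane(300, 300), (90000i32 & 0xFFFF) as i16);
    // 0.5 * 0.5 in Q15 fixed point rounds to 0.25.
    assert_eq!(mulhrs_epi16_lane(0x4000, 0x4000), 0x2000);
}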
_mm256_maskz_multishift_epi64_epi8Experimental(x86 or x86-64) and avx512vbmi,avx512vl
For each 64-bit element in b, select 8 unaligned bytes using a byte-granular shift control within the corresponding 64-bit element of a, and store the 8 assembled bytes to the corresponding 64-bit element of dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
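A scalar sketch of the multishift selection rule for one 64-bit element (plain Rust, illustrative only). It assumes, as in the underlying VPMULTISHIFTQB instruction, that each control byte is taken modulo 64 and that the 8 selected bits wrap around the 64-bit source element:

/// One 64-bit element of a multishift: byte i of `ctrl` gives the bit
/// offset at which an 8-bit field is read (circularly) from `data`.
fn multishift_epi64_epi8_element(ctrl: u64, data: u64) -> u64 {
    let mut dst = 0u64;
    for i in 0..8 {
        let shift = (ctrl >> (8 * i)) as u32 & 63;
        let byte = data.rotate_right(shift) & 0xFF;
        dst |= byte << (8 * i);
    }
    dst
}

fn main() {
    // With control bytes 0, 8, 16, ..., 56 this is just the identity.
    let ctrl = u64::from_le_bytes([0, 8, 16, 24, 32, 40, 48, 56]);
    assert_eq!(multishift_epi64_epi8_element(ctrl, 0x1122_3344_5566_7788), 0x1122_3344_5566_7788);
}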
_mm256_maskz_or_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Compute the bitwise OR of packed 32-bit integers in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_or_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Compute the bitwise OR of packed 64-bit integers in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_packs_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Convert packed signed 16-bit integers from a and b to packed 8-bit integers using signed saturation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_packs_epi32Experimental(x86 or x86-64) and avx512bw,avx512vl
Convert packed signed 32-bit integers from a and b to packed 16-bit integers using signed saturation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_packus_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Convert packed signed 16-bit integers from a and b to packed 8-bit integers using unsigned saturation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_packus_epi32Experimental(x86 or x86-64) and avx512bw,avx512vl
Convert packed signed 32-bit integers from a and b to packed 16-bit integers using unsigned saturation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_permute_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle double-precision (64-bit) floating-point elements in a within 128-bit lanes using the control in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_permute_psExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_permutevar_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle double-precision (64-bit) floating-point elements in a within 128-bit lanes using the control in b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_permutevar_psExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_permutex2var_epi8Experimental(x86 or x86-64) and avx512vbmi,avx512vl
Shuffle 8-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_permutex2var_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shuffle 16-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_permutex2var_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 32-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_permutex2var_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 64-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_permutex2var_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle double-precision (64-bit) floating-point elements in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_permutex2var_psExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle single-precision (32-bit) floating-point elements in a and b across lanes using the corresponding selector and index in idx, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_permutex_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 64-bit integers in a within 256-bit lanes using the control in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_permutex_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle double-precision (64-bit) floating-point elements in a within 256-bit lanes using the control in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_permutexvar_epi8Experimental(x86 or x86-64) and avx512vbmi,avx512vl
Shuffle 8-bit integers in a across lanes using the corresponding index in idx, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_permutexvar_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shuffle 16-bit integers in a across lanes using the corresponding index in idx, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_permutexvar_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 32-bit integers in a across lanes using the corresponding index in idx, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_permutexvar_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 64-bit integers in a across lanes using the corresponding index in idx, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_permutexvar_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle double-precision (64-bit) floating-point elements in a across lanes using the corresponding index in idx, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_permutexvar_psExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle single-precision (32-bit) floating-point elements in a across lanes using the corresponding index in idx, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_popcnt_epi8Experimental(x86 or x86-64) and avx512bitalg,avx512vl
For each packed 8-bit integer in a, map the value to the number of logical 1 bits, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_popcnt_epi16Experimental(x86 or x86-64) and avx512bitalg,avx512vl
For each packed 16-bit integer in a, map the value to the number of logical 1 bits, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_popcnt_epi32Experimental(x86 or x86-64) and avx512vpopcntdq,avx512vl
For each packed 32-bit integer in a, map the value to the number of logical 1 bits, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_popcnt_epi64Experimental(x86 or x86-64) and avx512vpopcntdq,avx512vl
For each packed 64-bit integer in a, map the value to the number of logical 1 bits, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_rcp14_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Compute the approximate reciprocal of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 2^-14.
_mm256_maskz_rcp14_psExperimental(x86 or x86-64) and avx512f,avx512vl
Compute the approximate reciprocal of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 2^-14.
_mm256_maskz_rol_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Rotate the bits in each packed 32-bit integer in a to the left by the number of bits specified in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_rol_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Rotate the bits in each packed 64-bit integer in a to the left by the number of bits specified in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_rolv_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Rotate the bits in each packed 32-bit integer in a to the left by the number of bits specified in the corresponding element of b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_rolv_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Rotate the bits in each packed 64-bit integer in a to the left by the number of bits specified in the corresponding element of b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_ror_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Rotate the bits in each packed 32-bit integer in a to the right by the number of bits specified in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_ror_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Rotate the bits in each packed 64-bit integer in a to the right by the number of bits specified in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_rorv_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Rotate the bits in each packed 32-bit integer in a to the right by the number of bits specified in the corresponding element of b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_rorv_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Rotate the bits in each packed 64-bit integer in a to the right by the number of bits specified in the corresponding element of b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
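Per lane, the rol/ror/rolv/rorv entries are ordinary bit rotations and map directly onto Rust's integer rotate methods, with the count taken modulo the lane width. A scalar sketch of one 32-bit lane (illustrative helpers, not part of this module):

/// Scalar model of one 32-bit lane of rol_epi32 / rolv_epi32:
/// a plain left rotation, with the count taken modulo the lane width.
fn rol_epi32_lane(a: u32, count: u32) -> u32 {
    a.rotate_left(count)
}

/// Scalar model of one 32-bit lane of ror_epi32 / rorv_epi32.
fn ror_epi32_lane(a: u32, count: u32) -> u32 {
    a.rotate_right(count)
}

fn main() {
    assert_eq!(rol_epi32_lane(0x8000_0001, 1), 0x0000_0003);
    assert_eq!(ror_epi32_lane(0x0000_0003, 1), 0x8000_0001);
}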
_mm256_maskz_roundscale_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Round packed double-precision (64-bit) floating-point elements in a to the number of fraction bits specified by imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
Rounding is done according to the imm8[2:0] parameter, which can be one of:
_MM_FROUND_TO_NEAREST_INT // round to nearest
_MM_FROUND_TO_NEG_INF // round down
_MM_FROUND_TO_POS_INF // round up
_MM_FROUND_TO_ZERO // truncate
_MM_FROUND_CUR_DIRECTION // use MXCSR.RC; see _MM_SET_ROUNDING_MODE
_mm256_maskz_roundscale_psExperimental(x86 or x86-64) and avx512f,avx512vl
Round packed single-precision (32-bit) floating-point elements in a to the number of fraction bits specified by imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
Rounding is done according to the imm8[2:0] parameter, which can be one of:
_MM_FROUND_TO_NEAREST_INT // round to nearest
_MM_FROUND_TO_NEG_INF // round down
_MM_FROUND_TO_POS_INF // round up
_MM_FROUND_TO_ZERO // truncate
_MM_FROUND_CUR_DIRECTION // use MXCSR.RC; see _MM_SET_ROUNDING_MODE
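Roundscale is easiest to read as scale, round, unscale on each element: assuming (as in the underlying VRNDSCALE instructions) that imm8[7:4] holds the number of fraction bits M and imm8[2:0] the rounding mode, the per-element result is round(x * 2^M) / 2^M. A plain-Rust sketch of one f64 lane using round-to-nearest; note that f64::round rounds halfway cases away from zero, whereas _MM_FROUND_TO_NEAREST_INT rounds them to even (illustrative helper, not part of this module):

/// Scalar model of one roundscale lane with round-to-nearest: keep
/// `fraction_bits` binary fraction bits of `x` and round the rest away.
fn roundscale_lane(x: f64, fraction_bits: u32) -> f64 {
    let scale = (1u64 << fraction_bits) as f64; // 2^M
    (x * scale).round() / scale
}

fn main() {
    // With 2 fraction bits, values round to the nearest multiple of 0.25.
    assert_eq!(roundscale_lane(1.30, 2), 1.25);
    assert_eq!(roundscale_lane(2.80, 2), 2.75);
}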
_mm256_maskz_rsqrt14_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Compute the approximate reciprocal square root of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 2^-14.
_mm256_maskz_rsqrt14_psExperimental(x86 or x86-64) and avx512f,avx512vl
Compute the approximate reciprocal square root of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set). The maximum relative error for this approximation is less than 2^-14.
_mm256_maskz_scalef_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Scale the packed double-precision (64-bit) floating-point elements in a using values from b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_scalef_psExperimental(x86 or x86-64) and avx512f,avx512vl
Scale the packed single-precision (32-bit) floating-point elements in a using values from b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
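Here "scale ... using values from b" means multiplying each element of a by 2 raised to the floor of the corresponding element of b, as in the underlying VSCALEF instructions. A plain-Rust sketch of one lane that ignores the instruction's special-case handling of NaN and infinite inputs (illustrative helper, not part of this module):

/// Scalar model of one scalef lane: a * 2^floor(b).
fn scalef_lane(a: f64, b: f64) -> f64 {
    a * (2.0f64).powi(b.floor() as i32)
}

fn main() {
    assert_eq!(scalef_lane(3.0, 2.7), 12.0); // 3 * 2^2
    assert_eq!(scalef_lane(3.0, -1.0), 1.5); // 3 * 2^-1
}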
_mm256_maskz_set1_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Broadcast 8-bit integer a to all elements of dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_set1_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Broadcast the low packed 16-bit integer from a to all elements of dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_set1_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Broadcast 32-bit integer a to all elements of dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_set1_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Broadcast 64-bit integer a to all elements of dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_shldi_epi16Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 16-bit integers in a and b producing an intermediate 32-bit result. Shift the result left by imm8 bits, and store the upper 16-bits in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_shldi_epi32Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 32-bit integers in a and b producing an intermediate 64-bit result. Shift the result left by imm8 bits, and store the upper 32-bits in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_shldi_epi64Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 64-bit integers in a and b producing an intermediate 128-bit result. Shift the result left by imm8 bits, and store the upper 64-bits in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_shldv_epi16Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 16-bit integers in a and b producing an intermediate 32-bit result. Shift the result left by the amount specified in the corresponding element of c, and store the upper 16-bits in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_shldv_epi32Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 32-bit integers in a and b producing an intermediate 64-bit result. Shift the result left by the amount specified in the corresponding element of c, and store the upper 32-bits in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_shldv_epi64Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 64-bit integers in a and b producing an intermediate 128-bit result. Shift the result left by the amount specified in the corresponding element of c, and store the upper 64-bits in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_shrdi_epi16Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 16-bit integers in b and a producing an intermediate 32-bit result. Shift the result right by imm8 bits, and store the lower 16-bits in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_shrdi_epi32Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 32-bit integers in b and a producing an intermediate 64-bit result. Shift the result right by imm8 bits, and store the lower 32-bits in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_shrdi_epi64Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 64-bit integers in b and a producing an intermediate 128-bit result. Shift the result right by imm8 bits, and store the lower 64-bits in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_shrdv_epi16Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 16-bit integers in b and a producing an intermediate 32-bit result. Shift the result right by the amount specified in the corresponding element of c, and store the lower 16-bits in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_shrdv_epi32Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 32-bit integers in b and a producing an intermediate 64-bit result. Shift the result right by the amount specified in the corresponding element of c, and store the lower 32-bits in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_shrdv_epi64Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 64-bit integers in b and a producing an intermediate 128-bit result. Shift the result right by the amount specified in the corresponding element of c, and store the lower 64-bits in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
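The shldi/shldv/shrdi/shrdv entries are funnel shifts. A plain-Rust sketch of one 32-bit lane, assuming (as in the underlying VPSHLD/VPSHRD instructions) that a supplies the upper half of the concatenation for the left shifts, b supplies the upper half for the right shifts, and the shift count is taken modulo the lane width (illustrative helpers, not part of this module):

/// One 32-bit lane of `shldi_epi32`: concatenate a:b into 64 bits,
/// shift left, keep the upper half.
fn shldi_epi32_lane(a: u32, b: u32, imm8: u32) -> u32 {
    let concat = ((a as u64) << 32) | b as u64;
    ((concat << (imm8 & 31)) >> 32) as u32
}

/// One 32-bit lane of `shrdi_epi32`: concatenate b:a into 64 bits,
/// shift right, keep the lower half.
fn shrdi_epi32_lane(a: u32, b: u32, imm8: u32) -> u32 {
    let concat = ((b as u64) << 32) | a as u64;
    (concat >> (imm8 & 31)) as u32
}

fn main() {
    // Shifting the pair left by 8 pulls the top byte of b into the bottom of a.
    assert_eq!(shldi_epi32_lane(0x1111_1111, 0xAA00_0000, 8), 0x1111_11AA);
    // Shifting the pair right by 8 pulls the bottom byte of b into the top of a.
    assert_eq!(shrdi_epi32_lane(0x1111_1111, 0x0000_00AA, 8), 0xAA11_1111);
}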
_mm256_maskz_shuffle_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Shuffle packed 8-bit integers in a according to shuffle control mask in the corresponding 8-bit element of b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_shuffle_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 32-bit integers in a within 128-bit lanes using the control in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_shuffle_f32x4Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 128-bits (composed of 4 single-precision (32-bit) floating-point elements) selected by imm8 from a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_shuffle_f64x2Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 128-bits (composed of 2 double-precision (64-bit) floating-point elements) selected by imm8 from a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_shuffle_i32x4Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 128-bits (composed of 4 32-bit integers) selected by imm8 from a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_shuffle_i64x2Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 128-bits (composed of 2 64-bit integers) selected by imm8 from a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_shuffle_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle double-precision (64-bit) floating-point elements within 128-bit lanes using the control in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_shuffle_psExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in imm8, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_shufflehi_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shuffle 16-bit integers in the high 64 bits of 128-bit lanes of a using the control in imm8. Store the results in the high 64 bits of 128-bit lanes of dst, with the low 64 bits of 128-bit lanes being copied from a to dst, using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_shufflelo_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shuffle 16-bit integers in the low 64 bits of 128-bit lanes of a using the control in imm8. Store the results in the low 64 bits of 128-bit lanes of dst, with the high 64 bits of 128-bit lanes being copied from a to dst, using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_sll_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shift packed 16-bit integers in a left by count while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_sll_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 32-bit integers in a left by count while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_sll_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 64-bit integers in a left by count while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_slli_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shift packed 16-bit integers in a left by imm8 while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_slli_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 32-bit integers in a left by imm8 while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_slli_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 64-bit integers in a left by imm8 while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_sllv_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shift packed 16-bit integers in a left by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_sllv_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 32-bit integers in a left by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_sllv_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 64-bit integers in a left by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_sqrt_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Compute the square root of packed double-precision (64-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_sqrt_psExperimental(x86 or x86-64) and avx512f,avx512vl
Compute the square root of packed single-precision (32-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_sra_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shift packed 16-bit integers in a right by count while shifting in sign bits, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_sra_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 32-bit integers in a right by count while shifting in sign bits, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_sra_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 64-bit integers in a right by count while shifting in sign bits, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_srai_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shift packed 16-bit integers in a right by imm8 while shifting in sign bits, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_srai_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 32-bit integers in a right by imm8 while shifting in sign bits, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_srai_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 64-bit integers in a right by imm8 while shifting in sign bits, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_srav_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shift packed 16-bit integers in a right by the amount specified by the corresponding element in count while shifting in sign bits, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_srav_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 32-bit integers in a right by the amount specified by the corresponding element in count while shifting in sign bits, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_srav_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 64-bit integers in a right by the amount specified by the corresponding element in count while shifting in sign bits, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_srl_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shift packed 16-bit integers in a right by count while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_srl_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 32-bit integers in a right by count while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_srl_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 64-bit integers in a right by count while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_srli_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shift packed 16-bit integers in a right by imm8 while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_srli_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 32-bit integers in a right by imm8 while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_srli_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 64-bit integers in a right by imm8 while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_srlv_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shift packed 16-bit integers in a right by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_srlv_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 32-bit integers in a right by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_srlv_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 64-bit integers in a right by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
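The difference between the sra* and srl* families above is arithmetic versus logical right shift, i.e. shifting a signed versus an unsigned lane. A plain-Rust sketch of one 16-bit lane for counts below the lane width (the intrinsics additionally define out-of-range counts, which ordinary Rust shifts do not; helper names are illustrative only):

/// `sra`: shift right, replicating the sign bit (arithmetic shift).
fn sra_epi16_lane(a: i16, count: u32) -> i16 {
    a >> count
}

/// `srl`: shift right, shifting in zeros (logical shift).
fn srl_epi16_lane(a: i16, count: u32) -> i16 {
    ((a as u16) >> count) as i16
}

fn main() {
    assert_eq!(sra_epi16_lane(-16, 2), -4);     // sign bits shifted in
    assert_eq!(srl_epi16_lane(-16, 2), 0x3FFC); // zeros shifted in
}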
_mm256_maskz_sub_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Subtract packed 8-bit integers in b from packed 8-bit integers in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_sub_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Subtract packed 16-bit integers in b from packed 16-bit integers in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_sub_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Subtract packed 32-bit integers in b from packed 32-bit integers in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_sub_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Subtract packed 64-bit integers in b from packed 64-bit integers in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_sub_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Subtract packed double-precision (64-bit) floating-point elements in b from packed double-precision (64-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_sub_psExperimental(x86 or x86-64) and avx512f,avx512vl
Subtract packed single-precision (32-bit) floating-point elements in b from packed single-precision (32-bit) floating-point elements in a, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_subs_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Subtract packed signed 8-bit integers in b from packed 8-bit integers in a using saturation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_subs_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Subtract packed signed 16-bit integers in b from packed 16-bit integers in a using saturation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_subs_epu8Experimental(x86 or x86-64) and avx512bw,avx512vl
Subtract packed unsigned 8-bit integers in b from packed unsigned 8-bit integers in a using saturation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_subs_epu16Experimental(x86 or x86-64) and avx512bw,avx512vl
Subtract packed unsigned 16-bit integers in b from packed unsigned 16-bit integers in a using saturation, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_ternarylogic_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Bitwise ternary logic that provides the capability to implement any three-operand binary function; the specific binary function is specified by value in imm8. For each bit in each packed 32-bit integer, the corresponding bit from a, b, and c are used to form a 3 bit index into imm8, and the value at that bit in imm8 is written to the corresponding bit in dst using zeromask k at 32-bit granularity (32-bit elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_ternarylogic_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Bitwise ternary logic that provides the capability to implement any three-operand binary function; the specific binary function is specified by value in imm8. For each bit in each packed 64-bit integer, the corresponding bit from a, b, and c are used to form a 3 bit index into imm8, and the value at that bit in imm8 is written to the corresponding bit in dst using zeromask k at 64-bit granularity (64-bit elements are zeroed out when the corresponding mask bit is not set).
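The ternary-logic entries are a per-bit, eight-entry truth table: at every bit position the bits of a, b and c (with a as the most significant index bit) select one bit of imm8. A plain-Rust sketch of one 32-bit element (illustrative helper, not part of this module):

/// Scalar model of one 32-bit element of `ternarylogic_epi32`:
/// for every bit position, (a, b, c) forms a 3-bit index into `imm8`.
fn ternarylogic_epi32_element(imm8: u8, a: u32, b: u32, c: u32) -> u32 {
    let mut dst = 0u32;
    for bit in 0..32 {
        let idx = ((a >> bit) & 1) << 2 | ((b >> bit) & 1) << 1 | ((c >> bit) & 1);
        dst |= ((imm8 as u32 >> idx) & 1) << bit;
    }
    dst
}

fn main() {
    // 0xE8 encodes the majority function: truth table 1110_1000.
    assert_eq!(ternarylogic_epi32_element(0xE8, 0b1100, 0b1010, 0b0110), 0b1110);
    // 0x96 encodes a three-way XOR.
    assert_eq!(ternarylogic_epi32_element(0x96, 1, 1, 1), 1);
}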
_mm256_maskz_unpackhi_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Unpack and interleave 8-bit integers from the high half of each 128-bit lane in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_unpackhi_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Unpack and interleave 16-bit integers from the high half of each 128-bit lane in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_unpackhi_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Unpack and interleave 32-bit integers from the high half of each 128-bit lane in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_unpackhi_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Unpack and interleave 64-bit integers from the high half of each 128-bit lane in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_unpackhi_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Unpack and interleave double-precision (64-bit) floating-point elements from the high half of each 128-bit lane in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_unpackhi_psExperimental(x86 or x86-64) and avx512f,avx512vl
Unpack and interleave single-precision (32-bit) floating-point elements from the high half of each 128-bit lane in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_unpacklo_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Unpack and interleave 8-bit integers from the low half of each 128-bit lane in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_unpacklo_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Unpack and interleave 16-bit integers from the low half of each 128-bit lane in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_unpacklo_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Unpack and interleave 32-bit integers from the low half of each 128-bit lane in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_unpacklo_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Unpack and interleave 64-bit integers from the low half of each 128-bit lane in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_unpacklo_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Unpack and interleave double-precision (64-bit) floating-point elements from the low half of each 128-bit lane in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_unpacklo_psExperimental(x86 or x86-64) and avx512f,avx512vl
Unpack and interleave single-precision (32-bit) floating-point elements from the low half of each 128-bit lane in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_xor_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Compute the bitwise XOR of packed 32-bit integers in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_maskz_xor_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Compute the bitwise XOR of packed 64-bit integers in a and b, and store the results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).
_mm256_max_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 64-bit integers in a and b, and store packed maximum values in dst.
_mm256_max_epu64Experimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 64-bit integers in a and b, and store packed maximum values in dst.
_mm256_min_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Compare packed signed 64-bit integers in a and b, and store packed minimum values in dst.
_mm256_min_epu64Experimental(x86 or x86-64) and avx512f,avx512vl
Compare packed unsigned 64-bit integers in a and b, and store packed minimum values in dst.
_mm256_movepi8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Set each bit of mask register k based on the most significant bit of the corresponding packed 8-bit integer in a.
_mm256_movepi16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Set each bit of mask register k based on the most significant bit of the corresponding packed 16-bit integer in a.
_mm256_movm_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Set each packed 8-bit integer in dst to all ones or all zeros based on the value of the corresponding bit in k.
_mm256_movm_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Set each packed 16-bit integer in dst to all ones or all zeros based on the value of the corresponding bit in k.
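The movepi*_mask and movm_epi* entries convert between a vector of lanes and a bitmask in opposite directions. A plain-Rust sketch for 8-bit lanes (illustrative helpers, not part of this module):

/// `movepi8_mask` model: mask bit i is the sign (most significant) bit of lane i.
fn movepi8_mask_model(a: &[i8]) -> u32 {
    let mut k = 0u32;
    for (i, &lane) in a.iter().enumerate() {
        if lane < 0 {
            k |= 1 << i;
        }
    }
    k
}

/// `movm_epi8` model: lane i is all ones if mask bit i is set, else all zeros.
fn movm_epi8_model(k: u32, lanes: usize) -> Vec<i8> {
    (0..lanes).map(|i| if (k >> i) & 1 == 1 { -1 } else { 0 }).collect()
}

fn main() {
    let v = [-1i8, 5, -128, 0];
    assert_eq!(movepi8_mask_model(&v), 0b0101);
    assert_eq!(movm_epi8_model(0b0101, 4), vec![-1, 0, -1, 0]);
}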
_mm256_multishift_epi64_epi8Experimental(x86 or x86-64) and avx512vbmi,avx512vl
For each 64-bit element in b, select 8 unaligned bytes using a byte-granular shift control within the corresponding 64-bit element of a, and store the 8 assembled bytes to the corresponding 64-bit element of dst.
_mm256_or_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Compute the bitwise OR of packed 32-bit integers in a and b, and store the results in dst.
_mm256_or_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Compute the bitwise OR of packed 64-bit integers in a and b, and store the results in dst.
_mm256_permutex2var_epi8Experimental(x86 or x86-64) and avx512vbmi,avx512vl
Shuffle 8-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst.
_mm256_permutex2var_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shuffle 16-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst.
_mm256_permutex2var_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 32-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst.
_mm256_permutex2var_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 64-bit integers in a and b across lanes using the corresponding selector and index in idx, and store the results in dst.
_mm256_permutex2var_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle double-precision (64-bit) floating-point elements in a and b across lanes using the corresponding selector and index in idx, and store the results in dst.
_mm256_permutex2var_psExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle single-precision (32-bit) floating-point elements in a and b across lanes using the corresponding selector and index in idx, and store the results in dst.
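The permutex2var entries pick each destination element from the concatenation of a and b: the low index bits select an element and the next bit selects the source, as in the underlying VPERMI2/VPERMT2 instructions. A plain-Rust sketch for 32-bit elements in 256-bit vectors, i.e. a 16-entry table (illustrative helper, not part of this module):

/// Scalar model of `_mm256_permutex2var_epi32`: each index selects from
/// the 16-element table formed by concatenating `a` (entries 0..8) and
/// `b` (entries 8..16); only the low 4 index bits are used.
fn permutex2var_epi32_model(a: [u32; 8], idx: [u32; 8], b: [u32; 8]) -> [u32; 8] {
    let mut dst = [0u32; 8];
    for i in 0..8 {
        let sel = (idx[i] & 0b1111) as usize;
        dst[i] = if sel < 8 { a[sel] } else { b[sel - 8] };
    }
    dst
}

fn main() {
    let a = [10, 11, 12, 13, 14, 15, 16, 17];
    let b = [20, 21, 22, 23, 24, 25, 26, 27];
    // Interleave the first four elements of a and b.
    let idx = [0, 8, 1, 9, 2, 10, 3, 11];
    assert_eq!(permutex2var_epi32_model(a, idx, b), [10, 20, 11, 21, 12, 22, 13, 23]);
}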
_mm256_permutex_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 64-bit integers in a within 256-bit lanes using the control in imm8, and store the results in dst.
_mm256_permutex_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle double-precision (64-bit) floating-point elements in a within 256-bit lanes using the control in imm8, and store the results in dst.
_mm256_permutexvar_epi8Experimental(x86 or x86-64) and avx512vbmi,avx512vl
Shuffle 8-bit integers in a across lanes using the corresponding index in idx, and store the results in dst.
_mm256_permutexvar_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shuffle 16-bit integers in a across lanes using the corresponding index in idx, and store the results in dst.
_mm256_permutexvar_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 32-bit integers in a across lanes using the corresponding index in idx, and store the results in dst.
_mm256_permutexvar_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 64-bit integers in a across lanes using the corresponding index in idx, and store the results in dst.
_mm256_permutexvar_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle double-precision (64-bit) floating-point elements in a across lanes using the corresponding index in idx, and store the results in dst.
_mm256_permutexvar_psExperimental(x86 or x86-64) and avx512f,avx512vl
Shuffle single-precision (32-bit) floating-point elements in a across lanes using the corresponding index in idx, and store the results in dst.
_mm256_popcnt_epi8Experimental(x86 or x86-64) and avx512bitalg,avx512vl
For each packed 8-bit integer in a, map the value to the number of logical 1 bits, and store the results in dst.
_mm256_popcnt_epi16Experimental(x86 or x86-64) and avx512bitalg,avx512vl
For each packed 16-bit integer in a, map the value to the number of logical 1 bits, and store the results in dst.
_mm256_popcnt_epi32Experimental(x86 or x86-64) and avx512vpopcntdq,avx512vl
For each packed 32-bit integer in a, map the value to the number of logical 1 bits, and store the results in dst.
_mm256_popcnt_epi64Experimental(x86 or x86-64) and avx512vpopcntdq,avx512vl
For each packed 64-bit integer in a, map the value to the number of logical 1 bits, and store the results in dst.
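Per lane, these popcnt entries are exactly count_ones applied to every element. A plain-Rust sketch for 8-bit lanes (illustrative helper, not part of this module):

/// Scalar model of `popcnt_epi8`: per-element population count.
fn popcnt_epi8_model(a: &[u8]) -> Vec<u8> {
    a.iter().map(|x| x.count_ones() as u8).collect()
}

fn main() {
    assert_eq!(popcnt_epi8_model(&[0b0000_0000, 0b1111_1111, 0b1010_0001]), vec![0, 8, 3]);
}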
_mm256_rcp14_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Compute the approximate reciprocal of packed double-precision (64-bit) floating-point elements in a, and store the results in dst. The maximum relative error for this approximation is less than 2^-14.
_mm256_rcp14_psExperimental(x86 or x86-64) and avx512f,avx512vl
Compute the approximate reciprocal of packed single-precision (32-bit) floating-point elements in a, and store the results in dst. The maximum relative error for this approximation is less than 2^-14.
_mm256_rol_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Rotate the bits in each packed 32-bit integer in a to the left by the number of bits specified in imm8, and store the results in dst.
_mm256_rol_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Rotate the bits in each packed 64-bit integer in a to the left by the number of bits specified in imm8, and store the results in dst.
_mm256_rolv_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Rotate the bits in each packed 32-bit integer in a to the left by the number of bits specified in the corresponding element of b, and store the results in dst.
_mm256_rolv_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Rotate the bits in each packed 64-bit integer in a to the left by the number of bits specified in the corresponding element of b, and store the results in dst.
_mm256_ror_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Rotate the bits in each packed 32-bit integer in a to the right by the number of bits specified in imm8, and store the results in dst.
_mm256_ror_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Rotate the bits in each packed 64-bit integer in a to the right by the number of bits specified in imm8, and store the results in dst.
_mm256_rorv_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Rotate the bits in each packed 32-bit integer in a to the right by the number of bits specified in the corresponding element of b, and store the results in dst.
_mm256_rorv_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Rotate the bits in each packed 64-bit integer in a to the right by the number of bits specified in the corresponding element of b, and store the results in dst.
_mm256_roundscale_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Round packed double-precision (64-bit) floating-point elements in a to the number of fraction bits specified by imm8, and store the results in dst.
Rounding is done according to the imm8[2:0] parameter, which can be one of:
_MM_FROUND_TO_NEAREST_INT // round to nearest
_MM_FROUND_TO_NEG_INF // round down
_MM_FROUND_TO_POS_INF // round up
_MM_FROUND_TO_ZERO // truncate
_MM_FROUND_CUR_DIRECTION // use MXCSR.RC; see _MM_SET_ROUNDING_MODE
_mm256_roundscale_psExperimental(x86 or x86-64) and avx512f,avx512vl
Round packed single-precision (32-bit) floating-point elements in a to the number of fraction bits specified by imm8, and store the results in dst.
Rounding is done according to the imm8[2:0] parameter, which can be one of:
_MM_FROUND_TO_NEAREST_INT // round to nearest
_MM_FROUND_TO_NEG_INF // round down
_MM_FROUND_TO_POS_INF // round up
_MM_FROUND_TO_ZERO // truncate
_MM_FROUND_CUR_DIRECTION // use MXCSR.RC; see _MM_SET_ROUNDING_MODE
_mm256_scalef_pdExperimental(x86 or x86-64) and avx512f,avx512vl
Scale the packed double-precision (64-bit) floating-point elements in a using values from b, and store the results in dst.
_mm256_scalef_psExperimental(x86 or x86-64) and avx512f,avx512vl
Scale the packed single-precision (32-bit) floating-point elements in a using values from b, and store the results in dst.
_mm256_shldi_epi16Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 16-bit integers in a and b producing an intermediate 32-bit result. Shift the result left by imm8 bits, and store the upper 16-bits in dst.
_mm256_shldi_epi32Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 32-bit integers in a and b producing an intermediate 64-bit result. Shift the result left by imm8 bits, and store the upper 32-bits in dst.
_mm256_shldi_epi64Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 64-bit integers in a and b producing an intermediate 128-bit result. Shift the result left by imm8 bits, and store the upper 64-bits in dst.
_mm256_shldv_epi16Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 16-bit integers in a and b producing an intermediate 32-bit result. Shift the result left by the amount specified in the corresponding element of c, and store the upper 16-bits in dst.
_mm256_shldv_epi32Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 32-bit integers in a and b producing an intermediate 64-bit result. Shift the result left by the amount specified in the corresponding element of c, and store the upper 32-bits in dst.
_mm256_shldv_epi64Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 64-bit integers in a and b producing an intermediate 128-bit result. Shift the result left by the amount specified in the corresponding element of c, and store the upper 64-bits in dst.
_mm256_shrdi_epi16Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 16-bit integers in b and a producing an intermediate 32-bit result. Shift the result right by imm8 bits, and store the lower 16-bits in dst.
_mm256_shrdi_epi32Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 32-bit integers in b and a producing an intermediate 64-bit result. Shift the result right by imm8 bits, and store the lower 32-bits in dst.
_mm256_shrdi_epi64Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 64-bit integers in b and a producing an intermediate 128-bit result. Shift the result right by imm8 bits, and store the lower 64-bits in dst.
_mm256_shrdv_epi16Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 16-bit integers in b and a producing an intermediate 32-bit result. Shift the result right by the amount specified in the corresponding element of c, and store the lower 16-bits in dst.
_mm256_shrdv_epi32Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 32-bit integers in b and a producing an intermediate 64-bit result. Shift the result right by the amount specified in the corresponding element of c, and store the lower 32-bits in dst.
_mm256_shrdv_epi64Experimental(x86 or x86-64) and avx512vbmi2,avx512vl
Concatenate packed 64-bit integers in b and a producing an intermediate 128-bit result. Shift the result right by the amount specified in the corresponding element of c, and store the lower 64-bits in dst.
_mm256_shuffle_f32x4Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 128-bits (composed of 4 single-precision (32-bit) floating-point elements) selected by imm8 from a and b, and store the results in dst.
_mm256_shuffle_f64x2Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 128-bits (composed of 2 double-precision (64-bit) floating-point elements) selected by imm8 from a and b, and store the results in dst.
_mm256_shuffle_i32x4Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 128-bits (composed of 4 32-bit integers) selected by imm8 from a and b, and store the results in dst.
_mm256_shuffle_i64x2Experimental(x86 or x86-64) and avx512f,avx512vl
Shuffle 128-bits (composed of 2 64-bit integers) selected by imm8 from a and b, and store the results in dst.
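The shuffle_f32x4/f64x2/i32x4/i64x2 entries select whole 128-bit halves rather than individual elements. A plain-Rust sketch of the 256-bit i32x4 case, assuming (as in the underlying VSHUFI32X4 encoding) that imm8 bit 0 picks the half of a that becomes the low half of dst and imm8 bit 1 picks the half of b that becomes the high half (illustrative helper, not part of this module):

/// Scalar model of `_mm256_shuffle_i32x4`: each source is viewed as two
/// 128-bit halves of four 32-bit integers.
fn shuffle_i32x4_model(a: [[u32; 4]; 2], b: [[u32; 4]; 2], imm8: u8) -> [[u32; 4]; 2] {
    [a[(imm8 & 1) as usize], b[((imm8 >> 1) & 1) as usize]]
}

fn main() {
    let a = [[0, 1, 2, 3], [4, 5, 6, 7]];
    let b = [[10, 11, 12, 13], [14, 15, 16, 17]];
    // imm8 = 0b01: high half of a, low half of b.
    assert_eq!(shuffle_i32x4_model(a, b, 0b01), [[4, 5, 6, 7], [10, 11, 12, 13]]);
}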
_mm256_sllv_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shift packed 16-bit integers in a left by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst.
_mm256_sra_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 64-bit integers in a right by count while shifting in sign bits, and store the results in dst.
_mm256_srai_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 64-bit integers in a right by imm8 while shifting in sign bits, and store the results in dst.
_mm256_srav_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shift packed 16-bit integers in a right by the amount specified by the corresponding element in count while shifting in sign bits, and store the results in dst.
_mm256_srav_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Shift packed 64-bit integers in a right by the amount specified by the corresponding element in count while shifting in sign bits, and store the results in dst.
_mm256_srlv_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Shift packed 16-bit integers in a right by the amount specified by the corresponding element in count while shifting in zeros, and store the results in dst.
_mm256_store_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Store 256-bits (composed of 8 packed 32-bit integers) from a into memory. mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.
_mm256_store_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Store 256-bits (composed of 4 packed 64-bit integers) from a into memory. mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.
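As a hedged sketch of one way to satisfy the 32-byte alignment these aligned stores require, using the stable _mm256_store_si256 as a stand-in for the experimental _mm256_store_epi32/_mm256_store_epi64 (the Aligned wrapper below is illustrative, not part of any API):
#[cfg(target_arch = "x86_64")]
fn aligned_store_demo() {
    use core::arch::x86_64::{__m256i, _mm256_set1_epi32, _mm256_store_si256};

    // Aligned 256-bit stores require a 32-byte-aligned destination.
    #[repr(align(32))]
    struct Aligned([i32; 8]);

    if is_x86_feature_detected!("avx") {
        let mut buf = Aligned([0; 8]);
        // SAFETY: "avx" was detected above and buf is 32-byte aligned.
        unsafe {
            _mm256_store_si256(buf.0.as_mut_ptr().cast::<__m256i>(), _mm256_set1_epi32(7));
        }
        assert!(buf.0.iter().all(|&x| x == 7));
    }
}

fn main() {
    #[cfg(target_arch = "x86_64")]
    aligned_store_demo();
}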
_mm256_storeu_epi8Experimental(x86 or x86-64) and avx512bw,avx512vl
Store 256-bits (composed of 32 packed 8-bit integers) from a into memory. mem_addr does not need to be aligned on any particular boundary.
_mm256_storeu_epi16Experimental(x86 or x86-64) and avx512bw,avx512vl
Store 256-bits (composed of 16 packed 16-bit integers) from a into memory. mem_addr does not need to be aligned on any particular boundary.
_mm256_storeu_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Store 256-bits (composed of 8 packed 32-bit integers) from a into memory. mem_addr does not need to be aligned on any particular boundary.
_mm256_storeu_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Store 256-bits (composed of 4 packed 64-bit integers) from a into memory. mem_addr does not need to be aligned on any particular boundary.
_mm256_ternarylogic_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Bitwise ternary logic that provides the capability to implement any three-operand binary function; the specific binary function is specified by value in imm8. For each bit in each packed 32-bit integer, the corresponding bits from a, b, and c are used to form a 3-bit index into imm8, and the value at that bit in imm8 is written to the corresponding bit in dst.
_mm256_ternarylogic_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Bitwise ternary logic that provides the capability to implement any three-operand binary function; the specific binary function is specified by value in imm8. For each bit in each packed 64-bit integer, the corresponding bits from a, b, and c are used to form a 3-bit index into imm8, and the value at that bit in imm8 is written to the corresponding bit in dst.
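A scalar sketch of the imm8 lookup described above (not the experimental intrinsic itself); imm8 = 0x96 is, for example, the truth table of a three-way XOR.
// Scalar model of ternarylogic: for every bit position, the bits of a, b and c
// form a 3-bit index into imm8, and that bit of imm8 becomes the output bit.
fn ternarylogic32(a: u32, b: u32, c: u32, imm8: u8) -> u32 {
    let mut dst = 0u32;
    for bit in 0..32 {
        let idx = (((a >> bit) & 1) << 2) | (((b >> bit) & 1) << 1) | ((c >> bit) & 1);
        dst |= (((imm8 as u32) >> idx) & 1) << bit;
    }
    dst
}

fn main() {
    // imm8 = 0x96 encodes the truth table of a ^ b ^ c.
    let (a, b, c) = (0x1234_5678, 0x0F0F_0F0F, 0xFFFF_0000);
    assert_eq!(ternarylogic32(a, b, c, 0x96), a ^ b ^ c);
}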
_mm256_test_epi8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compute the bitwise AND of packed 8-bit integers in a and b, producing intermediate 8-bit values, and set the corresponding bit in result mask k if the intermediate value is non-zero.
_mm256_test_epi16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compute the bitwise AND of packed 16-bit integers in a and b, producing intermediate 16-bit values, and set the corresponding bit in result mask k if the intermediate value is non-zero.
_mm256_test_epi32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compute the bitwise AND of packed 32-bit integers in a and b, producing intermediate 32-bit values, and set the corresponding bit in result mask k if the intermediate value is non-zero.
_mm256_test_epi64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compute the bitwise AND of packed 64-bit integers in a and b, producing intermediate 64-bit values, and set the corresponding bit in result mask k if the intermediate value is non-zero.
_mm256_testn_epi8_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compute the bitwise NAND of packed 8-bit integers in a and b, producing intermediate 8-bit values, and set the corresponding bit in result mask k if the intermediate value is zero.
_mm256_testn_epi16_maskExperimental(x86 or x86-64) and avx512bw,avx512vl
Compute the bitwise NAND of packed 16-bit integers in a and b, producing intermediate 16-bit values, and set the corresponding bit in result mask k if the intermediate value is zero.
_mm256_testn_epi32_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compute the bitwise NAND of packed 32-bit integers in a and b, producing intermediate 32-bit values, and set the corresponding bit in result mask k if the intermediate value is zero.
_mm256_testn_epi64_maskExperimental(x86 or x86-64) and avx512f,avx512vl
Compute the bitwise NAND of packed 64-bit integers in a and b, producing intermediate 64-bit values, and set the corresponding bit in result mask k if the intermediate value is zero.
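A scalar sketch of how the test/testn entries above derive one mask bit per lane, shown for eight 32-bit lanes; the testn variants set the bit in the complementary case, when the lane-wise AND is zero.
// Scalar model of _mm256_test_epi32_mask over 8 lanes: each mask bit is set
// when the lane-wise AND of a and b is non-zero (testn sets it when it is zero).
fn test_epi32_mask(a: &[u32; 8], b: &[u32; 8]) -> u8 {
    let mut k = 0u8;
    for (i, (&x, &y)) in a.iter().zip(b).enumerate() {
        if x & y != 0 {
            k |= 1 << i;
        }
    }
    k
}

fn main() {
    let a = [1, 0, 2, 4, 0, 8, 0, 0xFF];
    let b = [1, 1, 1, 4, 0, 7, 0, 0x0F];
    // Lanes 0, 3 and 7 have a non-zero AND.
    assert_eq!(test_epi32_mask(&a, &b), 0b1000_1001);
}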
_mm256_xor_epi32Experimental(x86 or x86-64) and avx512f,avx512vl
Compute the bitwise XOR of packed 32-bit integers in a and b, and store the results in dst.
_mm256_xor_epi64Experimental(x86 or x86-64) and avx512f,avx512vl
Compute the bitwise XOR of packed 64-bit integers in a and b, and store the results in dst.
_mm512_abs_epi8Experimental(x86 or x86-64) and avx512bw
Compute the absolute value of packed signed 8-bit integers in a, and store the unsigned results in dst.
_mm512_abs_epi16Experimental(x86 or x86-64) and avx512bw
Compute the absolute value of packed signed 16-bit integers in a, and store the unsigned results in dst.
_mm512_abs_epi32Experimental(x86 or x86-64) and avx512f
Compute the absolute value of packed signed 32-bit integers in a, and store the unsigned results in dst.
_mm512_abs_epi64Experimental(x86 or x86-64) and avx512f
Compute the absolute value of packed signed 64-bit integers in a, and store the unsigned results in dst.
_mm512_abs_pdExperimental(x86 or x86-64) and avx512f
Finds the absolute value of each packed double-precision (64-bit) floating-point element in v2, storing the results in dst.
_mm512_abs_psExperimental(x86 or x86-64) and avx512f
Finds the absolute value of each packed single-precision (32-bit) floating-point element in v2, storing the results in dst.
_mm512_add_epi8Experimental(x86 or x86-64) and avx512bw
Add packed 8-bit integers in a and b, and store the results in dst.
_mm512_add_epi16Experimental(x86 or x86-64) and avx512bw
Add packed 16-bit integers in a and b, and store the results in dst.
_mm512_add_epi32Experimental(x86 or x86-64) and avx512f
Add packed 32-bit integers in a and b, and store the results in dst.
_mm512_add_epi64Experimental(x86 or x86-64) and avx512f
Add packed 64-bit integers in a and b, and store the results in dst.
_mm512_add_pdExperimental(x86 or x86-64) and avx512f
Add packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst.
_mm512_add_psExperimental(x86 or x86-64) and avx512f
Add packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst.
_mm512_add_round_pdExperimental(x86 or x86-64) and avx512f
Add packed double-precision (64-bit) floating-point elements in a and b, and store the results in dst.
_mm512_add_round_psExperimental(x86 or x86-64) and avx512f
Add packed single-precision (32-bit) floating-point elements in a and b, and store the results in dst.
_mm512_adds_epi8Experimental(x86 or x86-64) and avx512bw
Add packed signed 8-bit integers in a and b using saturation, and store the results in dst.
_mm512_adds_epi16Experimental(x86 or x86-64) and avx512bw
Add packed signed 16-bit integers in a and b using saturation, and store the results in dst.
_mm512_adds_epu8Experimental(x86 or x86-64) and avx512bw
Add packed unsigned 8-bit integers in a and b using saturation, and store the results in dst.
_mm512_adds_epu16Experimental(x86 or x86-64) and avx512bw
Add packed unsigned 16-bit integers in a and b using saturation, and store the results in dst.
_mm512_aesdec_epi128Experimental(x86 or x86-64) and avx512vaes,avx512f
Performs one round of an AES decryption flow on each 128-bit word (state) in a using the corresponding 128-bit word (key) in round_key.
_mm512_aesdeclast_epi128Experimental(x86 or x86-64) and avx512vaes,avx512f
Performs the last round of an AES decryption flow on each 128-bit word (state) in a using the corresponding 128-bit word (key) in round_key.
_mm512_aesenc_epi128Experimental(x86 or x86-64) and avx512vaes,avx512f
Performs one round of an AES encryption flow on each 128-bit word (state) in a using the corresponding 128-bit word (key) in round_key.
_mm512_aesenclast_epi128Experimental(x86 or x86-64) and avx512vaes,avx512f
Performs the last round of an AES encryption flow on each 128-bit word (state) in a using the corresponding 128-bit word (key) in round_key.
_mm512_alignr_epi8Experimental(x86 or x86-64) and avx512bw
Concatenate pairs of 16-byte blocks in a and b into a 32-byte temporary result, shift the result right by imm8 bytes, and store the low 16 bytes in dst.
_mm512_alignr_epi32Experimental(x86 or x86-64) and avx512f
Concatenate a and b into a 128-byte intermediate result, shift the result right by imm8 32-bit elements, and store the low 64 bytes (16 elements) in dst.
_mm512_alignr_epi64Experimental(x86 or x86-64) and avx512f
Concatenate a and b into a 128-byte intermediate result, shift the result right by imm8 64-bit elements, and store the low 64 bytes (8 elements) in dst.
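A scalar sketch of the element-granular alignr_epi32 behaviour, assuming (as in the underlying valign instruction) that b forms the low half of the concatenation and that the shift count stays within 0..=16.
// Scalar model of _mm512_alignr_epi32: concatenate b (low) and a (high),
// shift right by whole 32-bit elements, and keep the low 16 elements.
fn alignr_epi32(a: [u32; 16], b: [u32; 16], shift: usize) -> Vec<u32> {
    b.into_iter().chain(a).skip(shift).take(16).collect()
}

fn main() {
    let a: [u32; 16] = core::array::from_fn(|i| 100 + i as u32);
    let b: [u32; 16] = core::array::from_fn(|i| i as u32);
    let dst = alignr_epi32(a, b, 4);
    // Shifting by 4 drops b[0..4] and pulls a[0..4] in at the top.
    assert_eq!(dst[..4], [4, 5, 6, 7]);
    assert_eq!(dst[12..], [100, 101, 102, 103]);
}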
_mm512_and_epi32Experimental(x86 or x86-64) and avx512f
Compute the bitwise AND of packed 32-bit integers in a and b, and store the results in dst.
_mm512_and_epi64Experimental(x86 or x86-64) and avx512f
Compute the bitwise AND of 512 bits (composed of packed 64-bit integers) in a and b, and store the results in dst.
_mm512_and_si512Experimental(x86 or x86-64) and avx512f
Compute the bitwise AND of 512 bits (representing integer data) in a and b, and store the result in dst.
_mm512_andnot_epi32Experimental(x86 or x86-64) and avx512f
Compute the bitwise NOT of packed 32-bit integers in a and then AND with b, and store the results in dst.
_mm512_andnot_epi64Experimental(x86 or x86-64) and avx512f
Compute the bitwise NOT of 512 bits (composed of packed 64-bit integers) in a and then AND with b, and store the results in dst.
_mm512_andnot_si512Experimental(x86 or x86-64) and avx512f
Compute the bitwise NOT of 512 bits (representing integer data) in a and then AND with b, and store the result in dst.
_mm512_avg_epu8Experimental(x86 or x86-64) and avx512bw
Average packed unsigned 8-bit integers in a and b, and store the results in dst.
_mm512_avg_epu16Experimental(x86 or x86-64) and avx512bw
Average packed unsigned 16-bit integers in a and b, and store the results in dst.
_mm512_bitshuffle_epi64_maskExperimental(x86 or x86-64) and avx512bitalg
Considers the input b as packed 64-bit integers and c as packed 8-bit integers. Then groups 8 8-bit values from c as indices into the bits of the corresponding 64-bit integer. It then selects these bits and packs them into the output.
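A scalar sketch of one 64-bit lane of the bit shuffle described above, assuming each index byte is reduced modulo 64 before selecting a bit from b.
// Scalar model of one lane of bitshuffle: each of the 8 index bytes in c
// selects one bit of the 64-bit value b, and the selected bits are packed
// into an 8-bit mask.
fn bitshuffle_lane(b: u64, c: [u8; 8]) -> u8 {
    let mut k = 0u8;
    for (j, &idx) in c.iter().enumerate() {
        let bit = (b >> (idx & 0x3F)) & 1;
        k |= (bit as u8) << j;
    }
    k
}

fn main() {
    // Selecting bits 0..8 of b in order reproduces b's low byte.
    assert_eq!(bitshuffle_lane(0xABCD, [0, 1, 2, 3, 4, 5, 6, 7]), 0xCD);
}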
_mm512_broadcast_f32x4Experimental(x86 or x86-64) and avx512f
Broadcast the 4 packed single-precision (32-bit) floating-point elements from a to all elements of dst.
_mm512_broadcast_f64x4Experimental(x86 or x86-64) and avx512f
Broadcast the 4 packed double-precision (64-bit) floating-point elements from a to all elements of dst.
_mm512_broadcast_i32x4Experimental(x86 or x86-64) and avx512f
Broadcast the 4 packed 32-bit integers from a to all elements of dst.
_mm512_broadcast_i64x4Experimental(x86 or x86-64) and avx512f
Broadcast the 4 packed 64-bit integers from a to all elements of dst.
_mm512_broadcastb_epi8Experimental(x86 or x86-64) and avx512bw
Broadcast the low packed 8-bit integer from a to all elements of dst.
_mm512_broadcastd_epi32Experimental(x86 or x86-64) and avx512f
Broadcast the low packed 32-bit integer from a to all elements of dst.
_mm512_broadcastmb_epi64Experimental(x86 or x86-64) and avx512cd
Broadcast the low 8-bits from input mask k to all 64-bit elements of dst.
_mm512_broadcastmw_epi32Experimental(x86 or x86-64) and avx512cd
Broadcast the low 16-bits from input mask k to all 32-bit elements of dst.
_mm512_broadcastq_epi64Experimental(x86 or x86-64) and avx512f
Broadcast the low packed 64-bit integer from a to all elements of dst.
_mm512_broadcastsd_pdExperimental(x86 or x86-64) and avx512f
Broadcast the low double-precision (64-bit) floating-point element from a to all elements of dst.
_mm512_broadcastss_psExperimental(x86 or x86-64) and avx512f
Broadcast the low single-precision (32-bit) floating-point element from a to all elements of dst.
_mm512_broadcastw_epi16Experimental(x86 or x86-64) and avx512bw
Broadcast the low packed 16-bit integer from a to all elements of dst.
_mm512_bslli_epi128Experimental(x86 or x86-64) and avx512bw
Shift 128-bit lanes in a left by imm8 bytes while shifting in zeros, and store the results in dst.
_mm512_bsrli_epi128Experimental(x86 or x86-64) and avx512bw
Shift 128-bit lanes in a right by imm8 bytes while shifting in zeros, and store the results in dst.
_mm512_castpd128_pd512Experimental(x86 or x86-64) and avx512f
Cast vector of type __m128d to type __m512d; the upper 384 bits of the result are undefined. This intrinsic is only used for compilation and does not generate any instructions, thus it has zero latency.
_mm512_castpd256_pd512Experimental(x86 or x86-64) and avx512f
Cast vector of type __m256d to type __m512d; the upper 256 bits of the result are undefined. This intrinsic is only used for compilation and does not generate any instructions, thus it has zero latency.
_mm512_castpd512_pd128Experimental(x86 or x86-64) and avx512f
Cast vector of type __m512d to type __m128d. This intrinsic is only used for compilation and does not generate any instructions, thus it has zero latency.
_mm512_castpd512_pd256Experimental(x86 or x86-64) and avx512f
Cast vector of type __m512d to type __m256d. This intrinsic is only used for compilation and does not generate any instructions, thus it has zero latency.
_mm512_castpd_psExperimental(x86 or x86-64) and avx512f
Cast vector of type __m512d to type __m512. This intrinsic is only used for compilation and does not generate any instructions, thus it has zero latency.
_mm512_castpd_si512Experimental(x86 or x86-64) and avx512f
Cast vector of type __m512d to type __m512i. This intrinsic is only used for compilation and does not generate any instructions, thus it has zero latency.
_mm512_castps128_ps512Experimental(x86 or x86-64) and avx512f
Cast vector of type __m128 to type __m512; the upper 384 bits of the result are undefined. This intrinsic is only used for compilation and does not generate any instructions, thus it has zero latency.
_mm512_castps256_ps512Experimental(x86 or x86-64) and avx512f
Cast vector of type __m256 to type __m512; the upper 256 bits of the result are undefined. This intrinsic is only used for compilation and does not generate any instructions, thus it has zero latency.
_mm512_castps512_ps128Experimental(x86 or x86-64) and avx512f
Cast vector of type __m512 to type __m128. This intrinsic is only used for compilation and does not generate any instructions, thus it has zero latency.
_mm512_castps512_ps256Experimental(x86 or x86-64) and avx512f
Cast vector of type __m512 to type __m256. This intrinsic is only used for compilation and does not generate any instructions, thus it has zero latency.
_mm512_castps_pdExperimental(x86 or x86-64) and avx512f
Cast vector of type __m512 to type __m512d. This intrinsic is only used for compilation and does not generate any instructions, thus it has zero latency.
_mm512_castps_si512Experimental(x86 or x86-64) and avx512f
Cast vector of type __m512 to type __m512i. This intrinsic is only used for compilation and does not generate any instructions, thus it has zero latency.
_mm512_castsi128_si512Experimental(x86 or x86-64) and avx512f
Cast vector of type __m128i to type __m512i; the upper 384 bits of the result are undefined. This intrinsic is only used for compilation and does not generate any instructions, thus it has zero latency.
_mm512_castsi256_si512Experimental(x86 or x86-64) and avx512f
Cast vector of type __m256i to type __m512i; the upper 256 bits of the result are undefined. This intrinsic is only used for compilation and does not generate any instructions, thus it has zero latency.
_mm512_castsi512_pdExperimental(x86 or x86-64) and avx512f
Cast vector of type __m512i to type __m512d. This intrinsic is only used for compilation and does not generate any instructions, thus it has zero latency.
_mm512_castsi512_psExperimental(x86 or x86-64) and avx512f
Cast vector of type __m512i to type __m512. This intrinsic is only used for compilation and does not generate any instructions, thus it has zero latency.
_mm512_castsi512_si128Experimental(x86 or x86-64) and avx512f
Cast vector of type __m512i to type __m128i. This intrinsic is only used for compilation and does not generate any instructions, thus it has zero latency.
_mm512_castsi512_si256Experimental(x86 or x86-64) and avx512f
Cast vector of type __m512i to type __m256i. This intrinsic is only used for compilation and does not generate any instructions, thus it has zero latency.
_mm512_clmulepi64_epi128Experimental(x86 or x86-64) and avx512vpclmulqdq,avx512f
Performs a carry-less multiplication of two 64-bit polynomials over the finite field GF(2), in each of the 4 128-bit lanes.
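A scalar sketch of a carry-less multiplication of two 64-bit polynomials (one lane's worth): partial products are combined with XOR rather than addition.
// Scalar model of a carry-less (polynomial) multiplication over GF(2).
fn clmul64(a: u64, b: u64) -> u128 {
    let mut acc = 0u128;
    for i in 0..64 {
        if (a >> i) & 1 == 1 {
            acc ^= (b as u128) << i;
        }
    }
    acc
}

fn main() {
    // (x + 1) * (x + 1) = x^2 + 1 over GF(2): 0b11 * 0b11 = 0b101.
    assert_eq!(clmul64(0b11, 0b11), 0b101);
}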
_mm512_cmp_epi8_maskExperimental(x86 or x86-64) and avx512bw
Compare packed signed 8-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
_mm512_cmp_epi16_maskExperimental(x86 or x86-64) and avx512bw
Compare packed signed 16-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
_mm512_cmp_epi32_maskExperimental(x86 or x86-64) and avx512f
Compare packed signed 32-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
_mm512_cmp_epi64_maskExperimental(x86 or x86-64) and avx512f
Compare packed signed 64-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
_mm512_cmp_epu8_maskExperimental(x86 or x86-64) and avx512bw
Compare packed unsigned 8-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
_mm512_cmp_epu16_maskExperimental(x86 or x86-64) and avx512bw
Compare packed unsigned 16-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
_mm512_cmp_epu32_maskExperimental(x86 or x86-64) and avx512f
Compare packed unsigned 32-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
_mm512_cmp_epu64_maskExperimental(x86 or x86-64) and avx512f
Compare packed unsigned 64-bit integers in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
_mm512_cmp_pd_maskExperimental(x86 or x86-64) and avx512f
Compare packed double-precision (64-bit) floating-point elements in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
_mm512_cmp_ps_maskExperimental(x86 or x86-64) and avx512f
Compare packed single-precision (32-bit) floating-point elements in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
_mm512_cmp_round_pd_maskExperimental(x86 or x86-64) and avx512f
Compare packed double-precision (64-bit) floating-point elements in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.
_mm512_cmp_round_ps_maskExperimental(x86 or x86-64) and avx512f
Compare packed single-precision (32-bit) floating-point elements in a and b based on the comparison operand specified by imm8, and store the results in mask vector k.
Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.
_mm512_cmpeq_epi8_maskExperimental(x86 or x86-64) and avx512bw
Compare packed signed 8-bit integers in a and b for equality, and store the results in mask vector k.
_mm512_cmpeq_epi16_maskExperimental(x86 or x86-64) and avx512bw
Compare packed signed 16-bit integers in a and b for equality, and store the results in mask vector k.
_mm512_cmpeq_epi32_maskExperimental(x86 or x86-64) and avx512f
Compare packed 32-bit integers in a and b for equality, and store the results in mask vector k.
_mm512_cmpeq_epi64_maskExperimental(x86 or x86-64) and avx512f
Compare packed 64-bit integers in a and b for equality, and store the results in mask vector k.
_mm512_cmpeq_epu8_maskExperimental(x86 or x86-64) and avx512bw
Compare packed unsigned 8-bit integers in a and b for equality, and store the results in mask vector k.
_mm512_cmpeq_epu16_maskExperimental(x86 or x86-64) and avx512bw
Compare packed unsigned 16-bit integers in a and b for equality, and store the results in mask vector k.
_mm512_cmpeq_epu32_maskExperimental(x86 or x86-64) and avx512f
Compare packed unsigned 32-bit integers in a and b for equality, and store the results in mask vector k.
_mm512_cmpeq_epu64_maskExperimental(x86 or x86-64) and avx512f
Compare packed unsigned 64-bit integers in a and b for equality, and store the results in mask vector k.
_mm512_cmpeq_pd_maskExperimental(x86 or x86-64) and avx512f
Compare packed double-precision (64-bit) floating-point elements in a and b for equality, and store the results in mask vector k.
_mm512_cmpeq_ps_maskExperimental(x86 or x86-64) and avx512f
Compare packed single-precision (32-bit) floating-point elements in a and b for equality, and store the results in mask vector k.
_mm512_cmpge_epi8_maskExperimental(x86 or x86-64) and avx512bw
Compare packed signed 8-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k.
_mm512_cmpge_epi16_maskExperimental(x86 or x86-64) and avx512bw
Compare packed signed 16-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k.
_mm512_cmpge_epi32_maskExperimental(x86 or x86-64) and avx512f
Compare packed signed 32-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k.
_mm512_cmpge_epi64_maskExperimental(x86 or x86-64) and avx512f
Compare packed signed 64-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k.
_mm512_cmpge_epu8_maskExperimental(x86 or x86-64) and avx512bw
Compare packed unsigned 8-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k.
_mm512_cmpge_epu16_maskExperimental(x86 or x86-64) and avx512bw
Compare packed unsigned 16-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k.
_mm512_cmpge_epu32_maskExperimental(x86 or x86-64) and avx512f
Compare packed unsigned 32-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k.
_mm512_cmpge_epu64_maskExperimental(x86 or x86-64) and avx512f
Compare packed unsigned 64-bit integers in a and b for greater-than-or-equal, and store the results in mask vector k.
_mm512_cmpgt_epi8_maskExperimental(x86 or x86-64) and avx512bw
Compare packed signed 8-bit integers in a and b for greater-than, and store the results in mask vector k.
_mm512_cmpgt_epi16_maskExperimental(x86 or x86-64) and avx512bw
Compare packed signed 16-bit integers in a and b for greater-than, and store the results in mask vector k.
_mm512_cmpgt_epi32_maskExperimental(x86 or x86-64) and avx512f
Compare packed signed 32-bit integers in a and b for greater-than, and store the results in mask vector k.
_mm512_cmpgt_epi64_maskExperimental(x86 or x86-64) and avx512f
Compare packed signed 64-bit integers in a and b for greater-than, and store the results in mask vector k.
_mm512_cmpgt_epu8_maskExperimental(x86 or x86-64) and avx512bw
Compare packed unsigned 8-bit integers in a and b for greater-than, and store the results in mask vector k.
_mm512_cmpgt_epu16_maskExperimental(x86 or x86-64) and avx512bw
Compare packed unsigned 16-bit integers in a and b for greater-than, and store the results in mask vector k.
_mm512_cmpgt_epu32_maskExperimental(x86 or x86-64) and avx512f
Compare packed unsigned 32-bit integers in a and b for greater-than, and store the results in mask vector k.
_mm512_cmpgt_epu64_maskExperimental(x86 or x86-64) and avx512f
Compare packed unsigned 64-bit integers in a and b for greater-than, and store the results in mask vector k.
_mm512_cmple_epi8_maskExperimental(x86 or x86-64) and avx512bw
Compare packed signed 8-bit integers in a and b for less-than-or-equal, and store the results in mask vector k.
_mm512_cmple_epi16_maskExperimental(x86 or x86-64) and avx512bw
Compare packed signed 16-bit integers in a and b for less-than-or-equal, and store the results in mask vector k.
_mm512_cmple_epi32_maskExperimental(x86 or x86-64) and avx512f
Compare packed signed 32-bit integers in a and b for less-than-or-equal, and store the results in mask vector k.
_mm512_cmple_epi64_maskExperimental(x86 or x86-64) and avx512f
Compare packed signed 64-bit integers in a and b for less-than-or-equal, and store the results in mask vector k.
_mm512_cmple_epu8_maskExperimental(x86 or x86-64) and avx512bw
Compare packed unsigned 8-bit integers in a and b for less-than-or-equal, and store the results in mask vector k.
_mm512_cmple_epu16_maskExperimental(x86 or x86-64) and avx512bw
Compare packed unsigned 16-bit integers in a and b for less-than-or-equal, and store the results in mask vector k.
_mm512_cmple_epu32_maskExperimental(x86 or x86-64) and avx512f
Compare packed unsigned 32-bit integers in a and b for less-than-or-equal, and store the results in mask vector k.
_mm512_cmple_epu64_maskExperimental(x86 or x86-64) and avx512f
Compare packed unsigned 64-bit integers in a and b for less-than-or-equal, and store the results in mask vector k.
_mm512_cmple_pd_maskExperimental(x86 or x86-64) and avx512f
Compare packed double-precision (64-bit) floating-point elements in a and b for less-than-or-equal, and store the results in mask vector k.
_mm512_cmple_ps_maskExperimental(x86 or x86-64) and avx512f
Compare packed single-precision (32-bit) floating-point elements in a and b for less-than-or-equal, and store the results in mask vector k.
_mm512_cmplt_epi8_maskExperimental(x86 or x86-64) and avx512bw
Compare packed signed 8-bit integers in a and b for less-than, and store the results in mask vector k.
_mm512_cmplt_epi16_maskExperimental(x86 or x86-64) and avx512bw
Compare packed signed 16-bit integers in a and b for less-than, and store the results in mask vector k.
_mm512_cmplt_epi32_maskExperimental(x86 or x86-64) and avx512f
Compare packed signed 32-bit integers in a and b for less-than, and store the results in mask vector k.
_mm512_cmplt_epi64_maskExperimental(x86 or x86-64) and avx512f
Compare packed signed 64-bit integers in a and b for less-than, and store the results in mask vector k.
_mm512_cmplt_epu8_maskExperimental(x86 or x86-64) and avx512bw
Compare packed unsigned 8-bit integers in a and b for less-than, and store the results in mask vector k.
_mm512_cmplt_epu16_maskExperimental(x86 or x86-64) and avx512bw
Compare packed unsigned 16-bit integers in a and b for less-than, and store the results in mask vector k.
_mm512_cmplt_epu32_maskExperimental(x86 or x86-64) and avx512f
Compare packed unsigned 32-bit integers in a and b for less-than, and store the results in mask vector k.
_mm512_cmplt_epu64_maskExperimental(x86 or x86-64) and avx512f
Compare packed unsigned 64-bit integers in a and b for less-than, and store the results in mask vector k.
_mm512_cmplt_pd_maskExperimental(x86 or x86-64) and avx512f
Compare packed double-precision (64-bit) floating-point elements in a and b for less-than, and store the results in mask vector k.
_mm512_cmplt_ps_maskExperimental(x86 or x86-64) and avx512f
Compare packed single-precision (32-bit) floating-point elements in a and b for less-than, and store the results in mask vector k.
_mm512_cmpneq_epi8_maskExperimental(x86 or x86-64) and avx512bw
Compare packed signed 8-bit integers in a and b for not-equal, and store the results in mask vector k.
_mm512_cmpneq_epi16_maskExperimental(x86 or x86-64) and avx512bw
Compare packed signed 16-bit integers in a and b for not-equal, and store the results in mask vector k.
_mm512_cmpneq_epi32_maskExperimental(x86 or x86-64) and avx512f
Compare packed 32-bit integers in a and b for not-equal, and store the results in mask vector k.
_mm512_cmpneq_epi64_maskExperimental(x86 or x86-64) and avx512f
Compare packed signed 64-bit integers in a and b for not-equal, and store the results in mask vector k.
_mm512_cmpneq_epu8_maskExperimental(x86 or x86-64) and avx512bw
Compare packed unsigned 8-bit integers in a and b for not-equal, and store the results in mask vector k.
_mm512_cmpneq_epu16_maskExperimental(x86 or x86-64) and avx512bw
Compare packed unsigned 16-bit integers in a and b for not-equal, and store the results in mask vector k.
_mm512_cmpneq_epu32_maskExperimental(x86 or x86-64) and avx512f
Compare packed unsigned 32-bit integers in a and b for not-equal, and store the results in mask vector k.
_mm512_cmpneq_epu64_maskExperimental(x86 or x86-64) and avx512f
Compare packed unsigned 64-bit integers in a and b for not-equal, and store the results in mask vector k.
_mm512_cmpneq_pd_maskExperimental(x86 or x86-64) and avx512f
Compare packed double-precision (64-bit) floating-point elements in a and b for not-equal, and store the results in mask vector k.
_mm512_cmpneq_ps_maskExperimental(x86 or x86-64) and avx512f
Compare packed single-precision (32-bit) floating-point elements in a and b for not-equal, and store the results in mask vector k.
_mm512_cmpnle_pd_maskExperimental(x86 or x86-64) and avx512f
Compare packed double-precision (64-bit) floating-point elements in a and b for not-less-than-or-equal, and store the results in mask vector k.
_mm512_cmpnle_ps_maskExperimental(x86 or x86-64) and avx512f
Compare packed single-precision (32-bit) floating-point elements in a and b for not-less-than-or-equal, and store the results in mask vector k.
_mm512_cmpnlt_pd_maskExperimental(x86 or x86-64) and avx512f
Compare packed double-precision (64-bit) floating-point elements in a and b for not-less-than, and store the results in mask vector k.
_mm512_cmpnlt_ps_maskExperimental(x86 or x86-64) and avx512f
Compare packed single-precision (32-bit) floating-point elements in a and b for not-less-than, and store the results in mask vector k.
_mm512_cmpord_pd_maskExperimental(x86 or x86-64) and avx512f
Compare packed double-precision (64-bit) floating-point elements in a and b to see if neither is NaN, and store the results in mask vector k.
_mm512_cmpord_ps_maskExperimental(x86 or x86-64) and avx512f
Compare packed single-precision (32-bit) floating-point elements in a and b to see if neither is NaN, and store the results in mask vector k.
_mm512_cmpunord_pd_maskExperimental(x86 or x86-64) and avx512f
Compare packed double-precision (64-bit) floating-point elements in a and b to see if either is NaN, and store the results in mask vector k.
_mm512_cmpunord_ps_maskExperimental(x86 or x86-64) and avx512f
Compare packed single-precision (32-bit) floating-point elements in a and b to see if either is NaN, and store the results in mask vector k.
_mm512_conflict_epi32Experimental(x86 or x86-64) and avx512cd
Test each 32-bit element of a for equality with all other elements in a closer to the least significant bit. Each element’s comparison forms a zero extended bit vector in dst.
_mm512_conflict_epi64Experimental(x86 or x86-64) and avx512cd
Test each 64-bit element of a for equality with all other elements in a closer to the least significant bit. Each element’s comparison forms a zero extended bit vector in dst.
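A scalar sketch of the conflict detection described above, for 32-bit elements: bit k of element j is set when a[j] equals an earlier element a[k].
// Scalar model of conflict_epi32: compare each element against all earlier
// elements and record the matches as a bit vector in the result lane.
fn conflict_epi32(a: &[u32; 16]) -> [u32; 16] {
    let mut dst = [0u32; 16];
    for j in 0..16 {
        for k in 0..j {
            if a[j] == a[k] {
                dst[j] |= 1 << k;
            }
        }
    }
    dst
}

fn main() {
    let a = [7, 3, 7, 7, 1, 3, 9, 8, 6, 5, 4, 2, 11, 12, 13, 14];
    let d = conflict_epi32(&a);
    assert_eq!(d[2], 0b0000_0001); // a[2] matches a[0]
    assert_eq!(d[3], 0b0000_0101); // a[3] matches a[0] and a[2]
    assert_eq!(d[5], 0b0000_0010); // a[5] matches a[1]
}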
_mm512_cvt_roundepi32_psExperimental(x86 or x86-64) and avx512f
Convert packed signed 32-bit integers in a to packed single-precision (32-bit) floating-point elements, and store the results in dst.
_mm512_cvt_roundepu32_psExperimental(x86 or x86-64) and avx512f
Convert packed unsigned 32-bit integers in a to packed single-precision (32-bit) floating-point elements, and store the results in dst.
_mm512_cvt_roundpd_epi32Experimental(x86 or x86-64) and avx512f
Convert packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers, and store the results in dst.
_mm512_cvt_roundpd_epu32Experimental(x86 or x86-64) and avx512f
Convert packed double-precision (64-bit) floating-point elements in a to packed unsigned 32-bit integers, and store the results in dst.
_mm512_cvt_roundpd_psExperimental(x86 or x86-64) and avx512f
Convert packed double-precision (64-bit) floating-point elements in a to packed single-precision (32-bit) floating-point elements, and store the results in dst.
_mm512_cvt_roundph_psExperimental(x86 or x86-64) and avx512f
Convert packed half-precision (16-bit) floating-point elements in a to packed single-precision (32-bit) floating-point elements, and store the results in dst.
Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.
_mm512_cvt_roundps_epi32Experimental(x86 or x86-64) and avx512f
Convert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers, and store the results in dst.
_mm512_cvt_roundps_epu32Experimental(x86 or x86-64) and avx512f
Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers, and store the results in dst.
_mm512_cvt_roundps_pdExperimental(x86 or x86-64) and avx512f
Convert packed single-precision (32-bit) floating-point elements in a to packed double-precision (64-bit) floating-point elements, and store the results in dst.
Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.
_mm512_cvt_roundps_phExperimental(x86 or x86-64) and avx512f
Convert packed single-precision (32-bit) floating-point elements in a to packed half-precision (16-bit) floating-point elements, and store the results in dst.
Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.
_mm512_cvtepi8_epi16Experimental(x86 or x86-64) and avx512bw
Sign extend packed 8-bit integers in a to packed 16-bit integers, and store the results in dst.
_mm512_cvtepi8_epi32Experimental(x86 or x86-64) and avx512f
Sign extend packed 8-bit integers in a to packed 32-bit integers, and store the results in dst.
_mm512_cvtepi8_epi64Experimental(x86 or x86-64) and avx512f
Sign extend packed 8-bit integers in the low 8 bytes of a to packed 64-bit integers, and store the results in dst.
_mm512_cvtepi16_epi8Experimental(x86 or x86-64) and avx512bw
Convert packed 16-bit integers in a to packed 8-bit integers with truncation, and store the results in dst.
_mm512_cvtepi16_epi32Experimental(x86 or x86-64) and avx512f
Sign extend packed 16-bit integers in a to packed 32-bit integers, and store the results in dst.
_mm512_cvtepi16_epi64Experimental(x86 or x86-64) and avx512f
Sign extend packed 16-bit integers in a to packed 64-bit integers, and store the results in dst.
_mm512_cvtepi32_epi8Experimental(x86 or x86-64) and avx512f
Convert packed 32-bit integers in a to packed 8-bit integers with truncation, and store the results in dst.
_mm512_cvtepi32_epi16Experimental(x86 or x86-64) and avx512f
Convert packed 32-bit integers in a to packed 16-bit integers with truncation, and store the results in dst.
_mm512_cvtepi32_epi64Experimental(x86 or x86-64) and avx512f
Sign extend packed 32-bit integers in a to packed 64-bit integers, and store the results in dst.
_mm512_cvtepi32_pdExperimental(x86 or x86-64) and avx512f
Convert packed signed 32-bit integers in a to packed double-precision (64-bit) floating-point elements, and store the results in dst.
_mm512_cvtepi32_psExperimental(x86 or x86-64) and avx512f
Convert packed signed 32-bit integers in a to packed single-precision (32-bit) floating-point elements, and store the results in dst.
_mm512_cvtepi32lo_pdExperimental(x86 or x86-64) and avx512f
Performs element-by-element conversion of the lower half of packed 32-bit integer elements in v2 to packed double-precision (64-bit) floating-point elements, storing the results in dst.
_mm512_cvtepi64_epi8Experimental(x86 or x86-64) and avx512f
Convert packed 64-bit integers in a to packed 8-bit integers with truncation, and store the results in dst.
_mm512_cvtepi64_epi16Experimental(x86 or x86-64) and avx512f
Convert packed 64-bit integers in a to packed 16-bit integers with truncation, and store the results in dst.
_mm512_cvtepi64_epi32Experimental(x86 or x86-64) and avx512f
Convert packed 64-bit integers in a to packed 32-bit integers with truncation, and store the results in dst.
_mm512_cvtepu8_epi16Experimental(x86 or x86-64) and avx512bw
Zero extend packed unsigned 8-bit integers in a to packed 16-bit integers, and store the results in dst.
_mm512_cvtepu8_epi32Experimental(x86 or x86-64) and avx512f
Zero extend packed unsigned 8-bit integers in a to packed 32-bit integers, and store the results in dst.
_mm512_cvtepu8_epi64Experimental(x86 or x86-64) and avx512f
Zero extend packed unsigned 8-bit integers in the low 8 bytes of a to packed 64-bit integers, and store the results in dst.
_mm512_cvtepu16_epi32Experimental(x86 or x86-64) and avx512f
Zero extend packed unsigned 16-bit integers in a to packed 32-bit integers, and store the results in dst.
_mm512_cvtepu16_epi64Experimental(x86 or x86-64) and avx512f
Zero extend packed unsigned 16-bit integers in a to packed 64-bit integers, and store the results in dst.
_mm512_cvtepu32_epi64Experimental(x86 or x86-64) and avx512f
Zero extend packed unsigned 32-bit integers in a to packed 64-bit integers, and store the results in dst.
_mm512_cvtepu32_pdExperimental(x86 or x86-64) and avx512f
Convert packed unsigned 32-bit integers in a to packed double-precision (64-bit) floating-point elements, and store the results in dst.
_mm512_cvtepu32_psExperimental(x86 or x86-64) and avx512f
Convert packed unsigned 32-bit integers in a to packed single-precision (32-bit) floating-point elements, and store the results in dst.
_mm512_cvtepu32lo_pdExperimental(x86 or x86-64) and avx512f
Performs element-by-element conversion of the lower half of packed 32-bit unsigned integer elements in v2 to packed double-precision (64-bit) floating-point elements, storing the results in dst.
_mm512_cvtne2ps_pbhExperimental(x86 or x86-64) and avx512bf16,avx512f
Convert packed single-precision (32-bit) floating-point elements in two 512-bit vectors a and b to packed BF16 (16-bit) floating-point elements, and store the results in a 512-bit wide vector. Intel’s documentation
_mm512_cvtneps_pbhExperimental(x86 or x86-64) and avx512bf16,avx512f
Convert packed single-precision (32-bit) floating-point elements in a to packed BF16 (16-bit) floating-point elements, and store the results in dst. Intel’s documentation
_mm512_cvtpd_epi32Experimental(x86 or x86-64) and avx512f
Convert packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers, and store the results in dst.
_mm512_cvtpd_epu32Experimental(x86 or x86-64) and avx512f
Convert packed double-precision (64-bit) floating-point elements in a to packed unsigned 32-bit integers, and store the results in dst.
_mm512_cvtpd_psExperimental(x86 or x86-64) and avx512f
Convert packed double-precision (64-bit) floating-point elements in a to packed single-precision (32-bit) floating-point elements, and store the results in dst.
_mm512_cvtpd_psloExperimental(x86 or x86-64) and avx512f
Performs an element-by-element conversion of packed double-precision (64-bit) floating-point elements in v2 to single-precision (32-bit) floating-point elements and stores them in dst. The elements are stored in the lower half of the results vector, while the remaining upper half locations are set to 0.
_mm512_cvtph_psExperimental(x86 or x86-64) and avx512f
Convert packed half-precision (16-bit) floating-point elements in a to packed single-precision (32-bit) floating-point elements, and store the results in dst.
_mm512_cvtps_epi32Experimental(x86 or x86-64) and avx512f
Convert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers, and store the results in dst.
_mm512_cvtps_epu32Experimental(x86 or x86-64) and avx512f
Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers, and store the results in dst.
_mm512_cvtps_pdExperimental(x86 or x86-64) and avx512f
Convert packed single-precision (32-bit) floating-point elements in a to packed double-precision (64-bit) floating-point elements, and store the results in dst.
_mm512_cvtps_phExperimental(x86 or x86-64) and avx512f
Convert packed single-precision (32-bit) floating-point elements in a to packed half-precision (16-bit) floating-point elements, and store the results in dst.
Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.
_mm512_cvtpslo_pdExperimental(x86 or x86-64) and avx512f
Performs element-by-element conversion of the lower half of packed single-precision (32-bit) floating-point elements in v2 to packed double-precision (64-bit) floating-point elements, storing the results in dst.
_mm512_cvtsepi16_epi8Experimental(x86 or x86-64) and avx512bw
Convert packed signed 16-bit integers in a to packed 8-bit integers with signed saturation, and store the results in dst.
_mm512_cvtsepi32_epi8Experimental(x86 or x86-64) and avx512f
Convert packed signed 32-bit integers in a to packed 8-bit integers with signed saturation, and store the results in dst.
_mm512_cvtsepi32_epi16Experimental(x86 or x86-64) and avx512f
Convert packed signed 32-bit integers in a to packed 16-bit integers with signed saturation, and store the results in dst.
_mm512_cvtsepi64_epi8Experimental(x86 or x86-64) and avx512f
Convert packed signed 64-bit integers in a to packed 8-bit integers with signed saturation, and store the results in dst.
_mm512_cvtsepi64_epi16Experimental(x86 or x86-64) and avx512f
Convert packed signed 64-bit integers in a to packed 16-bit integers with signed saturation, and store the results in dst.
_mm512_cvtsepi64_epi32Experimental(x86 or x86-64) and avx512f
Convert packed signed 64-bit integers in a to packed 32-bit integers with signed saturation, and store the results in dst.
_mm512_cvtsi512_si32Experimental(x86 or x86-64) and avx512f
Copy the lower 32-bit integer in a to dst.
_mm512_cvtt_roundpd_epi32Experimental(x86 or x86-64) and avx512f
Convert packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst.
Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.
_mm512_cvtt_roundpd_epu32Experimental(x86 or x86-64) and avx512f
Convert packed double-precision (64-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst.
Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.
_mm512_cvtt_roundps_epi32Experimental(x86 or x86-64) and avx512f
Convert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst.
Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.
_mm512_cvtt_roundps_epu32Experimental(x86 or x86-64) and avx512f
Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst.
Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.
_mm512_cvttpd_epi32Experimental(x86 or x86-64) and avx512f
Convert packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst.
_mm512_cvttpd_epu32Experimental(x86 or x86-64) and avx512f
Convert packed double-precision (64-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst.
_mm512_cvttps_epi32Experimental(x86 or x86-64) and avx512f
Convert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers with truncation, and store the results in dst.
_mm512_cvttps_epu32Experimental(x86 or x86-64) and avx512f
Convert packed single-precision (32-bit) floating-point elements in a to packed unsigned 32-bit integers with truncation, and store the results in dst.
_mm512_cvtusepi16_epi8Experimental(x86 or x86-64) and avx512bw
Convert packed unsigned 16-bit integers in a to packed unsigned 8-bit integers with unsigned saturation, and store the results in dst.
_mm512_cvtusepi32_epi8Experimental(x86 or x86-64) and avx512f
Convert packed unsigned 32-bit integers in a to packed unsigned 8-bit integers with unsigned saturation, and store the results in dst.
_mm512_cvtusepi32_epi16Experimental(x86 or x86-64) and avx512f
Convert packed unsigned 32-bit integers in a to packed unsigned 16-bit integers with unsigned saturation, and store the results in dst.
_mm512_cvtusepi64_epi8Experimental(x86 or x86-64) and avx512f
Convert packed unsigned 64-bit integers in a to packed unsigned 8-bit integers with unsigned saturation, and store the results in dst.
_mm512_cvtusepi64_epi16Experimental(x86 or x86-64) and avx512f
Convert packed unsigned 64-bit integers in a to packed unsigned 16-bit integers with unsigned saturation, and store the results in dst.
_mm512_cvtusepi64_epi32Experimental(x86 or x86-64) and avx512f
Convert packed unsigned 64-bit integers in a to packed unsigned 32-bit integers with unsigned saturation, and store the results in dst.
_mm512_dbsad_epu8Experimental(x86 or x86-64) and avx512bw
Compute the sum of absolute differences (SADs) of quadruplets of unsigned 8-bit integers in a compared to those in b, and store the 16-bit results in dst. Four SADs are performed on four 8-bit quadruplets for each 64-bit lane. The first two SADs use the lower 8-bit quadruplet of the lane from a, and the last two SADs use the upper 8-bit quadruplet of the lane from a. Quadruplets from b are selected from within 128-bit lanes according to the control in imm8, and each SAD in each 64-bit lane uses the selected quadruplet at 8-bit offsets.
_mm512_div_pdExperimental(x86 or x86-64) and avx512f
Divide packed double-precision (64-bit) floating-point elements in a by packed elements in b, and store the results in dst.
_mm512_div_psExperimental(x86 or x86-64) and avx512f
Divide packed single-precision (32-bit) floating-point elements in a by packed elements in b, and store the results in dst.
_mm512_div_round_pdExperimental(x86 or x86-64) and avx512f
Divide packed double-precision (64-bit) floating-point elements in a by packed elements in b, and store the results in dst.
_mm512_div_round_psExperimental(x86 or x86-64) and avx512f
Divide packed single-precision (32-bit) floating-point elements in a by packed elements in b, and store the results in dst.
_mm512_dpbf16_psExperimental(x86 or x86-64) and avx512bf16,avx512f
Compute dot-product of BF16 (16-bit) floating-point pairs in a and b, accumulating the intermediate single-precision (32-bit) floating-point elements with elements in src, and store the results in dst. Intel’s documentation
_mm512_dpbusd_epi32Experimental(x86 or x86-64) and avx512vnni
Multiply groups of 4 adjacent pairs of unsigned 8-bit integers in a with corresponding signed 8-bit integers in b, producing 4 intermediate signed 16-bit results. Sum these 4 results with the corresponding 32-bit integer in src, and store the packed 32-bit results in dst.
_mm512_dpbusds_epi32Experimental(x86 or x86-64) and avx512vnni
Multiply groups of 4 adjacent pairs of unsigned 8-bit integers in a with corresponding signed 8-bit integers in b, producing 4 intermediate signed 16-bit results. Sum these 4 results with the corresponding 32-bit integer in src using signed saturation, and store the packed 32-bit results in dst.
_mm512_dpwssd_epi32Experimental(x86 or x86-64) and avx512vnni
Multiply groups of 2 adjacent pairs of signed 16-bit integers in a with corresponding 16-bit integers in b, producing 2 intermediate signed 32-bit results. Sum these 2 results with the corresponding 32-bit integer in src, and store the packed 32-bit results in dst.
_mm512_dpwssds_epi32Experimental(x86 or x86-64) and avx512vnni
Multiply groups of 2 adjacent pairs of signed 16-bit integers in a with corresponding 16-bit integers in b, producing 2 intermediate signed 32-bit results. Sum these 2 results with the corresponding 32-bit integer in src using signed saturation, and store the packed 32-bit results in dst.
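A scalar sketch of one 32-bit lane of dpbusd: four unsigned bytes from a are multiplied with four signed bytes from b and the products are accumulated into the lane from src (the *s variants saturate the final accumulation instead of wrapping).
// Scalar model of one lane of _mm512_dpbusd_epi32.
fn dpbusd_lane(src: i32, a: [u8; 4], b: [i8; 4]) -> i32 {
    let mut sum = src;
    for i in 0..4 {
        // u8 * i8 products fit in 16 bits; accumulate them into the 32-bit lane.
        sum = sum.wrapping_add(a[i] as i32 * b[i] as i32);
    }
    sum
}

fn main() {
    assert_eq!(dpbusd_lane(10, [1, 2, 3, 4], [1, -1, 1, -1]), 10 + 1 - 2 + 3 - 4);
}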
_mm512_extractf32x4_psExperimental(x86 or x86-64) and avx512f
Extract 128 bits (composed of 4 packed single-precision (32-bit) floating-point elements) from a, selected with imm8, and store the result in dst.
_mm512_extractf64x4_pdExperimental(x86 or x86-64) and avx512f
Extract 256 bits (composed of 4 packed double-precision (64-bit) floating-point elements) from a, selected with imm8, and store the result in dst.
_mm512_extracti32x4_epi32Experimental(x86 or x86-64) and avx512f
Extract 128 bits (composed of 4 packed 32-bit integers) from a, selected with IMM2, and store the result in dst.
_mm512_extracti64x4_epi64Experimental(x86 or x86-64) and avx512f
Extract 256 bits (composed of 4 packed 64-bit integers) from a, selected with IMM1, and store the result in dst.
_mm512_fixupimm_pdExperimental(x86 or x86-64) and avx512f
Fix up packed double-precision (64-bit) floating-point elements in a and b using packed 64-bit integers in c, and store the results in dst. imm8 is used to set the required flags reporting.
_mm512_fixupimm_psExperimental(x86 or x86-64) and avx512f
Fix up packed single-precision (32-bit) floating-point elements in a and b using packed 32-bit integers in c, and store the results in dst. imm8 is used to set the required flags reporting.
_mm512_fixupimm_round_pdExperimental(x86 or x86-64) and avx512f
Fix up packed double-precision (64-bit) floating-point elements in a and b using packed 64-bit integers in c, and store the results in dst. imm8 is used to set the required flags reporting.
_mm512_fixupimm_round_psExperimental(x86 or x86-64) and avx512f
Fix up packed single-precision (32-bit) floating-point elements in a and b using packed 32-bit integers in c, and store the results in dst. imm8 is used to set the required flags reporting.
_mm512_fmadd_pdExperimental(x86 or x86-64) and avx512f
Multiply packed double-precision (64-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst.
_mm512_fmadd_psExperimental(x86 or x86-64) and avx512f
Multiply packed single-precision (32-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst.
_mm512_fmadd_round_pdExperimental(x86 or x86-64) and avx512f
Multiply packed double-precision (64-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst.
_mm512_fmadd_round_psExperimental(x86 or x86-64) and avx512f
Multiply packed single-precision (32-bit) floating-point elements in a and b, add the intermediate result to packed elements in c, and store the results in dst.
_mm512_fmaddsub_pdExperimental(x86 or x86-64) and avx512f
Multiply packed double-precision (64-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst.
_mm512_fmaddsub_psExperimental(x86 or x86-64) and avx512f
Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst.
_mm512_fmaddsub_round_pdExperimental(x86 or x86-64) and avx512f
Multiply packed double-precision (64-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst.
_mm512_fmaddsub_round_psExperimental(x86 or x86-64) and avx512f
Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively add and subtract packed elements in c to/from the intermediate result, and store the results in dst.
_mm512_fmsub_pdExperimental(x86 or x86-64) and avx512f
Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst.
_mm512_fmsub_psExperimental(x86 or x86-64) and avx512f
Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst.
_mm512_fmsub_round_pdExperimental(x86 or x86-64) and avx512f
Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst.
_mm512_fmsub_round_psExperimental(x86 or x86-64) and avx512f
Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the intermediate result, and store the results in dst.
_mm512_fmsubadd_pdExperimental(x86 or x86-64) and avx512f
Multiply packed double-precision (64-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst.
_mm512_fmsubadd_psExperimental(x86 or x86-64) and avx512f
Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst.
_mm512_fmsubadd_round_pdExperimental(x86 or x86-64) and avx512f
Multiply packed double-precision (64-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst.
_mm512_fmsubadd_round_psExperimental(x86 or x86-64) and avx512f
Multiply packed single-precision (32-bit) floating-point elements in a and b, alternatively subtract and add packed elements in c from/to the intermediate result, and store the results in dst.
_mm512_fnmadd_pdExperimental(x86 or x86-64) and avx512f
Multiply packed double-precision (64-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst.
_mm512_fnmadd_psExperimental(x86 or x86-64) and avx512f
Multiply packed single-precision (32-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst.
_mm512_fnmadd_round_pdExperimental(x86 or x86-64) and avx512f
Multiply packed double-precision (64-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst.
_mm512_fnmadd_round_psExperimental(x86 or x86-64) and avx512f
Multiply packed single-precision (32-bit) floating-point elements in a and b, add the negated intermediate result to packed elements in c, and store the results in dst.
_mm512_fnmsub_pdExperimental(x86 or x86-64) and avx512f
Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst.
_mm512_fnmsub_psExperimental(x86 or x86-64) and avx512f
Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst.
_mm512_fnmsub_round_pdExperimental(x86 or x86-64) and avx512f
Multiply packed double-precision (64-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst.
_mm512_fnmsub_round_psExperimental(x86 or x86-64) and avx512f
Multiply packed single-precision (32-bit) floating-point elements in a and b, subtract packed elements in c from the negated intermediate result, and store the results in dst.
_mm512_getexp_pdExperimental(x86 or x86-64) and avx512f
Convert the exponent of each packed double-precision (64-bit) floating-point element in a to a double-precision (64-bit) floating-point number representing the integer exponent, and store the results in dst. This intrinsic essentially calculates floor(log2(x)) for each element.
_mm512_getexp_psExperimental(x86 or x86-64) and avx512f
Convert the exponent of each packed single-precision (32-bit) floating-point element in a to a single-precision (32-bit) floating-point number representing the integer exponent, and store the results in dst. This intrinsic essentially calculates floor(log2(x)) for each element.
_mm512_getexp_round_pdExperimental(x86 or x86-64) and avx512f
Convert the exponent of each packed double-precision (64-bit) floating-point element in a to a double-precision (64-bit) floating-point number representing the integer exponent, and store the results in dst. This intrinsic essentially calculates floor(log2(x)) for each element.
Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.
_mm512_getexp_round_psExperimental(x86 or x86-64) and avx512f
Convert the exponent of each packed single-precision (32-bit) floating-point element in a to a single-precision (32-bit) floating-point number representing the integer exponent, and store the results in dst. This intrinsic essentially calculates floor(log2(x)) for each element.
Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.
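For normal, finite, non-zero inputs the getexp family returns the unbiased exponent as a floating-point value, i.e. floor(log2(|x|)). A plain-Rust scalar sketch of that calculation for one f64 lane (illustrative helper only, not part of this module; it reads the exponent field directly so the power-of-two cases stay exact):

// Scalar model of _mm512_getexp_pd for one normal, non-zero lane:
// extract the biased exponent from the IEEE-754 bit pattern and
// remove the f64 bias of 1023.
fn getexp_scalar(x: f64) -> f64 {
    let biased = ((x.to_bits() >> 52) & 0x7ff) as i64;
    (biased - 1023) as f64
}

fn main() {
    assert_eq!(getexp_scalar(8.0), 3.0);    // 8 = 1.0 * 2^3
    assert_eq!(getexp_scalar(1.5), 0.0);    // 1.5 = 1.5 * 2^0
    assert_eq!(getexp_scalar(-0.75), -1.0); // 0.75 = 1.5 * 2^-1
}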
_mm512_getmant_pdExperimental(x86 or x86-64) and avx512f
Normalize the mantissas of packed double-precision (64-bit) floating-point elements in a, and store the results in dst. This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign.
The mantissa is normalized to the interval specified by interv, which can take the following values:
_MM_MANT_NORM_1_2 // interval [1, 2)
_MM_MANT_NORM_P5_2 // interval [0.5, 2)
_MM_MANT_NORM_P5_1 // interval [0.5, 1)
_MM_MANT_NORM_P75_1P5 // interval [0.75, 1.5)
The sign is determined by sc, which can take the following values:
_MM_MANT_SIGN_SRC // sign = sign(src)
_MM_MANT_SIGN_ZERO // sign = 0
_MM_MANT_SIGN_NAN // dst = NaN if sign(src) = 1
_mm512_getmant_psExperimental(x86 or x86-64) and avx512f
Normalize the mantissas of packed single-precision (32-bit) floating-point elements in a, and store the results in dst. This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign.
The mantissa is normalized to the interval specified by interv, which can take the following values:
_MM_MANT_NORM_1_2 // interval [1, 2)
_MM_MANT_NORM_P5_2 // interval [0.5, 2)
_MM_MANT_NORM_P5_1 // interval [0.5, 1)
_MM_MANT_NORM_P75_1P5 // interval [0.75, 1.5)
The sign is determined by sc, which can take the following values:
_MM_MANT_SIGN_SRC // sign = sign(src)
_MM_MANT_SIGN_ZERO // sign = 0
_MM_MANT_SIGN_NAN // dst = NaN if sign(src) = 1
_mm512_getmant_round_pdExperimental(x86 or x86-64) and avx512f
Normalize the mantissas of packed double-precision (64-bit) floating-point elements in a, and store the results in dst. This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign.
The mantissa is normalized to the interval specified by interv, which can take the following values:
_MM_MANT_NORM_1_2 // interval [1, 2)
_MM_MANT_NORM_P5_2 // interval [0.5, 2)
_MM_MANT_NORM_P5_1 // interval [0.5, 1)
_MM_MANT_NORM_P75_1P5 // interval [0.75, 1.5)
The sign is determined by sc, which can take the following values:
_MM_MANT_SIGN_SRC // sign = sign(src)
_MM_MANT_SIGN_ZERO // sign = 0
_MM_MANT_SIGN_NAN // dst = NaN if sign(src) = 1
Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.
_mm512_getmant_round_psExperimental(x86 or x86-64) and avx512f
Normalize the mantissas of packed single-precision (32-bit) floating-point elements in a, and store the results in dst. This intrinsic essentially calculates ±(2^k)*|x.significand|, where k depends on the interval range defined by interv and the sign depends on sc and the source sign.
The mantissa is normalized to the interval specified by interv, which can take the following values:
_MM_MANT_NORM_1_2 // interval [1, 2)
_MM_MANT_NORM_P5_2 // interval [0.5, 2)
_MM_MANT_NORM_P5_1 // interval [0.5, 1)
_MM_MANT_NORM_P75_1P5 // interval [0.75, 1.5)
The sign is determined by sc, which can take the following values:
_MM_MANT_SIGN_SRC // sign = sign(src)
_MM_MANT_SIGN_ZERO // sign = 0
_MM_MANT_SIGN_NAN // dst = NaN if sign(src) = 1
Exceptions can be suppressed by passing _MM_FROUND_NO_EXC in the sae parameter.
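To make the normalization concrete, here is a plain-Rust scalar sketch of the most common getmant configuration, interv = _MM_MANT_NORM_1_2 with sc = _MM_MANT_SIGN_SRC, for one normal, non-zero f64 lane (illustrative helper only, not part of this module):

// Keep the sign bit and the significand, force the exponent field to the
// bias (1023), which places |result| in the interval [1, 2).
fn getmant_1_2_sign_src(x: f64) -> f64 {
    let bits = x.to_bits();
    let sign_and_significand = bits & !(0x7ffu64 << 52);
    f64::from_bits(sign_and_significand | (1023u64 << 52))
}

fn main() {
    assert_eq!(getmant_1_2_sign_src(12.0), 1.5);    // 12 = 1.5 * 2^3
    assert_eq!(getmant_1_2_sign_src(-0.375), -1.5); // -0.375 = -1.5 * 2^-2
}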
_mm512_gf2p8affine_epi64_epi8Experimental(x86 or x86-64) and avx512gfni,avx512bw,avx512f
Performs an affine transformation on the packed bytes in x. That is, it computes a*x+b over the Galois field GF(2^8) for each packed byte, with a being an 8x8 bit matrix and b being a constant 8-bit immediate value. Each pack of 8 bytes in x is paired with the 64-bit word at the same position in a.
_mm512_gf2p8affineinv_epi64_epi8Experimental(x86 or x86-64) and avx512gfni,avx512bw,avx512f
Performs an affine transformation on the inverted packed bytes in x. That is, it computes a*inv(x)+b over the Galois field GF(2^8) for each packed byte, with a being an 8x8 bit matrix and b being a constant 8-bit immediate value. The inverse of a byte is defined with respect to the reduction polynomial x^8+x^4+x^3+x+1. The inverse of 0 is 0. Each pack of 8 bytes in x is paired with the 64-bit word at the same position in a.
_mm512_gf2p8mul_epi8Experimental(x86 or x86-64) and avx512gfni,avx512bw,avx512f
Performs a multiplication in GF(2^8) on the packed bytes. The field is in polynomial representation with the reduction polynomial x^8 + x^4 + x^3 + x + 1.
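The per-byte arithmetic behind the gf2p8 intrinsics is multiplication in GF(2^8) with the reduction polynomial x^8 + x^4 + x^3 + x + 1. A plain-Rust model of one byte-by-byte product (illustrative helper only; the intrinsics apply this to every byte of the vectors):

// Carry-less "Russian peasant" multiplication in GF(2^8), reducing by
// x^8 + x^4 + x^3 + x + 1 (0x11B) whenever the intermediate overflows a byte.
fn gf2p8_mul(mut a: u8, mut b: u8) -> u8 {
    let mut product = 0u8;
    for _ in 0..8 {
        if b & 1 != 0 {
            product ^= a;
        }
        let carry = a & 0x80 != 0;
        a <<= 1;
        if carry {
            a ^= 0x1b; // low 8 bits of the reduction polynomial
        }
        b >>= 1;
    }
    product
}

fn main() {
    assert_eq!(gf2p8_mul(0x02, 0x80), 0x1b); // x * x^7 = x^8 = x^4 + x^3 + x + 1
    assert_eq!(gf2p8_mul(0x53, 0xca), 0x01); // 0x53 and 0xca are inverses in this field
}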
_mm512_i32gather_epi32Experimental(x86 or x86-64) and avx512f
Gather 32-bit integers from memory using 32-bit indices. 32-bit elements are loaded from addresses starting at base_addr and offset by each 32-bit element in vindex (each index is scaled by the factor in scale). Gathered elements are merged into dst. scale should be 1, 2, 4 or 8.
_mm512_i32gather_epi64Experimental(x86 or x86-64) and avx512f
Gather 64-bit integers from memory using 32-bit indices. 64-bit elements are loaded from addresses starting at base_addr and offset by each 32-bit element in vindex (each index is scaled by the factor in scale). Gathered elements are merged into dst. scale should be 1, 2, 4 or 8.
_mm512_i32gather_pdExperimental(x86 or x86-64) and avx512f
Gather double-precision (64-bit) floating-point elements from memory using 32-bit indices. 64-bit elements are loaded from addresses starting at base_addr and offset by each 32-bit element in vindex (each index is scaled by the factor in scale). Gathered elements are merged into dst. scale should be 1, 2, 4 or 8.
_mm512_i32gather_psExperimental(x86 or x86-64) and avx512f
Gather single-precision (32-bit) floating-point elements from memory using 32-bit indices. 32-bit elements are loaded from addresses starting at base_addr and offset by each 32-bit element in vindex (each index is scaled by the factor in scale). Gathered elements are merged into dst. scale should be 1, 2, 4 or 8.
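All four i32gather intrinsics above follow the same addressing pattern: lane i of dst is loaded from base_addr + vindex[i] * scale. A safe plain-Rust model of that pattern over a slice (illustrative helper only; the real intrinsics are unsafe, take a raw base pointer, and scale by 1, 2, 4 or 8 bytes):

// Gather 16 f32 values through a vector of indices; the byte scale is
// folded into ordinary slice indexing here.
fn gather_f32(base: &[f32], vindex: &[i32; 16]) -> [f32; 16] {
    let mut dst = [0.0f32; 16];
    for (d, &idx) in dst.iter_mut().zip(vindex.iter()) {
        *d = base[idx as usize];
    }
    dst
}

fn main() {
    let table: Vec<f32> = (0..100).map(|i| i as f32).collect();
    let idx = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3];
    let gathered = gather_f32(&table, &idx);
    assert_eq!(gathered[0], 3.0);
    assert_eq!(gathered[5], 9.0);
}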
_mm512_i32scatter_epi32Experimental(x86 or x86-64) and avx512f
Scatter 32-bit integers from a into memory using 32-bit indices. 32-bit elements are stored at addresses starting at base_addr and offset by each 32-bit element in vindex (each index is scaled by the factor in scale). scale should be 1, 2, 4 or 8.
_mm512_i32scatter_epi64Experimental(x86 or x86-64) and avx512f
Scatter 64-bit integers from a into memory using 32-bit indices. 64-bit elements are stored at addresses starting at base_addr and offset by each 32-bit element in vindex (each index is scaled by the factor in scale). scale should be 1, 2, 4 or 8.
_mm512_i32scatter_pdExperimental(x86 or x86-64) and avx512f
Scatter double-precision (64-bit) floating-point elements from a into memory using 32-bit indices. 64-bit elements are stored at addresses starting at base_addr and offset by each 32-bit element in vindex (each index is scaled by the factor in scale). scale should be 1, 2, 4 or 8.
_mm512_i32scatter_psExperimental(x86 or x86-64) and avx512f
Scatter single-precision (32-bit) floating-point elements from a into memory using 32-bit indices. 32-bit elements are stored at addresses starting at base_addr and offset by each 32-bit element in vindex (each index is scaled by the factor in scale). scale should be 1, 2, 4 or 8.
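The i32scatter intrinsics are the mirror image: lane i of a is stored to base_addr + vindex[i] * scale. A plain-Rust model of the pattern (illustrative helper only; when two lanes carry the same index, this model keeps the value from the higher-numbered lane, matching the architectural rule that only the most significant of overlapping stores is guaranteed):

// Scatter 16 i32 values through a vector of indices; duplicate indices are
// resolved in favour of the later (higher-numbered) lane.
fn scatter_i32(base: &mut [i32], vindex: &[i32; 16], a: &[i32; 16]) {
    for (&idx, &val) in vindex.iter().zip(a.iter()) {
        base[idx as usize] = val;
    }
}

fn main() {
    let mut buf = [0i32; 8];
    let idx = [0, 1, 2, 3, 4, 5, 6, 7, 0, 1, 2, 3, 4, 5, 6, 7];
    let vals: [i32; 16] = core::array::from_fn(|i| i as i32);
    scatter_i32(&mut buf, &idx, &vals);
    assert_eq!(buf, [8, 9, 10, 11, 12, 13, 14, 15]); // lanes 8..15 overwrote lanes 0..7
}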
_mm512_i64gather_epi32Experimental(x86 or x86-64) and avx512f
Gather 32-bit integers from memory using 64-bit indices. 32-bit elements are loaded from addresses starting at base_addr and offset by each 64-bit element in vindex (each index is scaled by the factor in scale). Gathered elements are merged into dst. scale should be 1, 2, 4 or 8.
_mm512_i64gather_epi64Experimental(x86 or x86-64) and avx512f
Gather 64-bit integers from memory using 64-bit indices. 64-bit elements are loaded from addresses starting at base_addr and offset by each 64-bit element in vindex (each index is scaled by the factor in scale). Gathered elements are merged into dst. scale should be 1, 2, 4 or 8.
_mm512_i64gather_pdExperimental(x86 or x86-64) and avx512f
Gather double-precision (64-bit) floating-point elements from memory using 64-bit indices. 64-bit elements are loaded from addresses starting at base_addr and offset by each 64-bit element in vindex (each index is scaled by the factor in scale). Gathered elements are merged into dst. scale should be 1, 2, 4 or 8.
_mm512_i64gather_psExperimental(x86 or x86-64) and avx512f
Gather single-precision (32-bit) floating-point elements from memory using 64-bit indices. 32-bit elements are loaded from addresses starting at base_addr and offset by each 64-bit element in vindex (each index is scaled by the factor in scale). Gathered elements are merged into dst. scale should be 1, 2, 4 or 8.
_mm512_i64scatter_epi32Experimental(x86 or x86-64) and avx512f
Scatter 32-bit integers from a into memory using 64-bit indices. 32-bit elements are stored at addresses starting at base_addr and offset by each 64-bit element in vindex (each index is scaled by the factor in scale). scale should be 1, 2, 4 or 8.
_mm512_i64scatter_epi64Experimental(x86 or x86-64) and avx512f
Scatter 64-bit integers from a into memory using 64-bit indices. 64-bit elements are stored at addresses starting at base_addr and offset by each 64-bit element in vindex (each index is scaled by the factor in scale). scale should be 1, 2, 4 or 8.
_mm512_i64scatter_pdExperimental(x86 or x86-64) and avx512f
Scatter double-precision (64-bit) floating-point elements from a into memory using 64-bit indices. 64-bit elements are stored at addresses starting at base_addr and offset by each 64-bit element in vindex (each index is scaled by the factor in scale). scale should be 1, 2, 4 or 8.
_mm512_i64scatter_psExperimental(x86 or x86-64) and avx512f
Scatter single-precision (32-bit) floating-point elements from a into memory using 64-bit indices. 32-bit elements are stored at addresses starting at base_addr and offset by each 64-bit element in vindex (each index is scaled by the factor in scale). scale should be 1, 2, 4 or 8.
_mm512_insertf32x4Experimental(x86 or x86-64) and avx512f
Copy a to dst, then insert 128 bits (composed of 4 packed single-precision (32-bit) floating-point elements) from b into dst at the location specified by imm8.
_mm512_insertf64x4Experimental(x86 or x86-64) and avx512f
Copy a to dst, then insert 256 bits (composed of 4 packed double-precision (64-bit) floating-point elements) from b into dst at the location specified by imm8.
_mm512_inserti32x4Experimental(x86 or x86-64) and avx512f
Copy a to dst, then insert 128 bits (composed of 4 packed 32-bit integers) from b into dst at the location specified by imm8.
_mm512_inserti64x4Experimental(x86 or x86-64) and avx512f
Copy a to dst, then insert 256 bits (composed of 4 packed 64-bit integers) from b into dst at the location specified by imm8.
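The insert intrinsics above all amount to copying a and then replacing one 128-bit or 256-bit lane of dst, selected by imm8. A plain-Rust sketch of the f32x4 case using arrays (illustrative helper only, not the intrinsic itself):

// Treat the 512-bit vector as four 128-bit lanes of four f32s each and
// replace the lane selected by the low two bits of imm8.
fn insertf32x4(a: [f32; 16], b: [f32; 4], imm8: usize) -> [f32; 16] {
    let mut dst = a;
    let lane = imm8 & 3;
    dst[lane * 4..lane * 4 + 4].copy_from_slice(&b);
    dst
}

fn main() {
    let a = [0.0f32; 16];
    let b = [1.0, 2.0, 3.0, 4.0];
    let r = insertf32x4(a, b, 2);
    assert_eq!(&r[8..12], &b[..]); // third 128-bit lane replaced
    assert_eq!(r[0], 0.0);
}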
_mm512_int2maskExperimental(x86 or x86-64) and avx512f
Convert an integer mask into a 16-bit bitmask, and store the result in dst.
_mm512_kandExperimental(x86 or x86-64) and avx512f
Compute the bitwise AND of 16-bit masks a and b, and store the result in k.
_mm512_kandnExperimental(x86 or x86-64) and avx512f
Compute the bitwise NOT of 16-bit mask a, then AND the result with b, and store the result in k.
_mm512_kmovExperimental(x86 or x86-64) and avx512f
Copy 16-bit mask a to k.
_mm512_knotExperimental(x86 or x86-64) and avx512f
Compute the bitwise NOT of 16-bit mask a, and store the result in k.
_mm512_korExperimental(x86 or x86-64) and avx512f
Compute the bitwise OR of 16-bit masks a and b, and store the result in k.
_mm512_kortestcExperimental(x86 or x86-64) and avx512f
Performs a bitwise OR between k1 and k2, storing the result in dst. The CF flag is set if dst consists of all 1's.
_mm512_kunpackbExperimental(x86 or x86-64) and avx512f
Unpack and interleave 8 bits from masks a and b, and store the 16-bit result in k.
_mm512_kxnorExperimental(x86 or x86-64) and avx512f
Compute the bitwise XNOR of 16-bit masks a and b, and store the result in k.
_mm512_kxorExperimental(x86 or x86-64) and avx512f
Compute the bitwise XOR of 16-bit masks a and b, and store the result in k.
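Seen through plain Rust, the 16-bit mask operations above are ordinary bitwise logic on a u16, one bit per lane. A minimal sketch of their semantics (illustrative helpers only, not the intrinsics; kortestc is modeled as returning whether the CF condition holds):

fn kand(a: u16, b: u16) -> u16 { a & b }
fn kandn(a: u16, b: u16) -> u16 { !a & b } // NOT a, then AND with b
fn knot(a: u16) -> u16 { !a }
fn kor(a: u16, b: u16) -> u16 { a | b }
fn kxor(a: u16, b: u16) -> u16 { a ^ b }
fn kxnor(a: u16, b: u16) -> u16 { !(a ^ b) }
fn kortestc(a: u16, b: u16) -> bool { (a | b) == u16::MAX } // CF: OR is all 1's

fn main() {
    assert_eq!(kand(0b1100, 0b1010), 0b1000);
    assert_eq!(kandn(0b1010, 0b0110), 0b0100);
    assert_eq!(knot(0x00ff), 0xff00);
    assert_eq!(kor(0b1100, 0b1010), 0b1110);
    assert_eq!(kxor(0b1100, 0b1010), 0b0110);
    assert_eq!(kxnor(0xffff, 0x0000), 0x0000);
    assert!(kortestc(0xff00, 0x00ff));
}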
_mm512_load_epi32Experimental(x86 or x86-64) and avx512f
Load 512 bits (composed of 16 packed 32-bit integers) from memory into dst. mem_addr must be aligned on a 64-byte boundary or a general-protection exception may be generated.
_mm512_load_epi64Experimental(x86 or x86-64) and avx512f
Load 512 bits (composed of 8 packed 64-bit integers) from memory into dst. mem_addr must be aligned on a 64-byte boundary or a general-protection exception may be generated.
_mm512_load_pdExperimental(x86 or x86-64) and avx512f
Load 512 bits (composed of 8 packed double-precision (64-bit) floating-point elements) from memory into dst. mem_addr must be aligned on a 64-byte boundary or a general-protection exception may be generated.
_mm512_load_psExperimental(x86 or x86-64) and avx512f
Load 512 bits (composed of 16 packed single-precision (32-bit) floating-point elements) from memory into dst. mem_addr must be aligned on a 64-byte boundary or a general-protection exception may be generated.
_mm512_load_si512Experimental(x86 or x86-64) and avx512f
Load 512 bits of integer data from memory into dst. mem_addr must be aligned on a 64-byte boundary or a general-protection exception may be generated.
_mm512_loadu_epi8Experimental(x86 or x86-64) and avx512bw
Load 512 bits (composed of 64 packed 8-bit integers) from memory into dst. mem_addr does not need to be aligned on any particular boundary.
_mm512_loadu_epi16Experimental(x86 or x86-64) and avx512bw
Load 512 bits (composed of 32 packed 16-bit integers) from memory into dst. mem_addr does not need to be aligned on any particular boundary.
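The difference between the load and loadu families above is purely alignment: the load variants require mem_addr to be 64-byte aligned, while the loadu variants accept any address. One common way to obtain a 64-byte-aligned buffer in Rust is a #[repr(align(64))] wrapper; the Aligned64 type below is a hypothetical helper for illustration, not part of this module:

// A 64-byte-aligned buffer of sixteen i32s, suitable for the aligned
// _mm512_load_* variants; arbitrary slices are only guaranteed to be
// readable with the unaligned _mm512_loadu_* variants.
#[repr(align(64))]
struct Aligned64([i32; 16]);

fn main() {
    let data = Aligned64([7; 16]);
    let ptr = data.0.as_ptr();
    assert_eq!(ptr as usize % 64, 0); // meets the 64-byte requirement

    // An offset into a Vec has no such guarantee, so only the
    // unaligned loads may be used on it.
    let v: Vec<i16> = (0..40).map(|x| x as i16).collect();
    let _unaligned_ptr = v[1..].as_ptr();
}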