libgomp.texi (gcn, nvptx): Mention self_maps alongside USM
libgomp/ChangeLog:

	* libgomp.texi (gcn, nvptx): Mention self_maps clause besides
	unified_shared_memory in the requirements item.

parent 727f330f9a
commit 1ff4a22103

1 changed file with 2 additions and 2 deletions
libgomp/libgomp.texi
@@ -6888,7 +6888,7 @@ The implementation remark:
 @code{device(ancestor:1)}) are processed serially per @code{target} region
 such that the next reverse offload region is only executed after the previous
 one returned.
-@item OpenMP code that has a @code{requires} directive with
+@item OpenMP code that has a @code{requires} directive with @code{self_maps} or
 @code{unified_shared_memory} is only supported if all AMD GPUs have the
 @code{HSA_AMD_SYSTEM_INFO_SVM_ACCESSIBLE_BY_DEFAULT} property; for
 discrete GPUs, this may require setting the @code{HSA_XNACK} environment
@@ -7045,7 +7045,7 @@ The implementation remark:
 Per device, reverse offload regions are processed serially such that
 the next reverse offload region is only executed after the previous
 one returned.
-@item OpenMP code that has a @code{requires} directive with
+@item OpenMP code that has a @code{requires} directive with @code{self_maps} or
 @code{unified_shared_memory} runs on nvptx devices if and only if
 all of those support the @code{pageableMemoryAccess} property;@footnote{
 @uref{https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#um-requirements}}