
【CUDA】常见错误类型cudaError_t

文章目录

  • CUDA错误类型
    • 错误类型说明
    • CUDA Error types
    • 参考链接
CUDA错误类型

整理一下NVIDIA官方文档中列出的CUDA常见错误类型。
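
所有CUDA运行时API都返回cudaError_t类型的错误码。下面给出一个最小的错误检查宏示意(仅为说明用法的假设性示例,宏名CUDA_CHECK为本文自拟),通过cudaGetErrorName()和cudaGetErrorString()把错误码转换成可读的名称和描述:

```cpp
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// 把返回 cudaError_t 的调用包起来,出错时打印错误名、错误码和描述后退出
#define CUDA_CHECK(call)                                                     \
    do {                                                                     \
        cudaError_t err_ = (call);                                           \
        if (err_ != cudaSuccess) {                                           \
            std::fprintf(stderr, "CUDA error %s (%d): %s at %s:%d\n",        \
                         cudaGetErrorName(err_), static_cast<int>(err_),     \
                         cudaGetErrorString(err_), __FILE__, __LINE__);      \
            std::exit(EXIT_FAILURE);                                         \
        }                                                                    \
    } while (0)

int main() {
    float* d_buf = nullptr;
    // 若显存不足,cudaMalloc 会返回 cudaErrorMemoryAllocation(见下文)
    CUDA_CHECK(cudaMalloc(&d_buf, 1 << 20));
    CUDA_CHECK(cudaFree(d_buf));
    return 0;
}
```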

错误类型说明

  • cudaSuccess = 0
    API调用返回没有错误。对于查询调用,这还意味着要查询的操作已完成(请参阅cudaEventQuery()和cudaStreamQuery())。
  • cudaErrorInvalidValue = 1
    这表明传递给API调用的一个或多个参数不在可接受的值范围内。
  • cudaErrorMemoryAllocation = 2
    API调用失败,因为它无法分配足够的内存来执行请求的操作。
  • cudaErrorInitializationError = 3
    API调用失败,因为无法初始化CUDA驱动程序和运行时。
  • cudaErrorCudartUnloading = 4
    这表明无法执行CUDA运行时API调用,因为它是在进程关闭期间(在卸载CUDA驱动程序后的某个时间)调用的。
  • cudaErrorProfilerDisabled = 5
    这表明没有为此运行初始化探查器。当应用程序使用外部概要分析工具(如可视化探查器)运行时,可能会发生这种情况。
  • cudaErrorProfilerNotInitialized = 6
    不推荐使用
    从CUDA 5.0开始不推荐使用此错误返回。在未初始化的情况下,尝试通过cudaProfilerStart或cudaProfilerStop启用/禁用概要分析不再是错误。
  • cudaErrorProfilerAlreadyStarted = 7
    不推荐使用
    从CUDA 5.0开始不推荐使用此错误返回。已经启用概要分析时,调用cudaProfilerStart()不再是错误。
  • cudaErrorProfilerAlreadyStopped = 8
    不推荐使用
    从CUDA 5.0开始不推荐使用此错误返回。在已禁用分析的情况下,调用cudaProfilerStop()不再是错误。
  • cudaErrorInvalidConfiguration = 9
    这表明内核启动正在请求当前设备永远无法满足的资源。每个块请求的共享内存超过设备支持的上限会触发此错误,请求过多的线程或块同样会触发此错误。有关更多设备限制,请参见cudaDeviceProp。(启动后如何检测此类错误,可参见本节列表之后的示例。)
  • cudaErrorInvalidPitchValue = 12
    这表明传递给API调用的一个或多个与间距(pitch)相关的参数不在可接受的范围内。
  • cudaErrorInvalidSymbol = 13
    这表明传递给API调用的符号名称/标识符不是有效的名称或标识符。
  • cudaErrorInvalidHostPointer = 16
    不推荐使用
    从CUDA 10.1开始不推荐使用此错误返回。
    这表明传递给API调用的至少一个主机指针不是有效的主机指针。
  • cudaErrorInvalidDevicePointer = 17
    不推荐使用
    从CUDA 10.1开始不推荐使用此错误返回。
    这表明传递给API调用的至少一个设备指针不是有效的设备指针。
  • cudaErrorInvalidTexture = 18
    这表明传递给API调用的纹理不是有效的纹理。
  • cudaErrorInvalidTextureBinding = 19
    这表明纹理绑定无效。如果您使用未绑定的纹理调用cudaGetTextureAlignmentOffset(),则会发生这种情况。
  • cudaErrorInvalidChannelDescriptor = 20
    这表明传递给API调用的通道描述符无效。如果格式不是cudaChannelFormatKind指定的格式之一,或者尺寸之一无效,则会发生这种情况。
  • cudaErrorInvalidMemcpyDirection = 21
    这表明传递给API调用的memcpy的方向不是cudaMemcpyKind指定的类型之一。
  • cudaErrorAddressOfConstant = 22
    不推荐使用
    从CUDA 3.1开始不推荐使用此错误返回。常量内存中的变量现在可以通过cudaGetSymbolAddress()由运行时获取其地址。
    这表明用户获取了常量变量的地址,而在CUDA 3.1发行版之前这是被禁止的。
  • cudaErrorTextureFetchFailed = 23
    不推荐使用
    从CUDA 3.1开始不推荐使用此错误返回。 CUDA 3.1发行版删除了设备仿真模式。
    这表明无法执行纹理获取。该错误以前用于纹理操作的设备仿真。
  • cudaErrorTextureNotBound = 24
    不推荐使用
    从CUDA 3.1开始不推荐使用此错误返回。 CUDA 3.1发行版删除了设备仿真模式。
    这表明纹理未被绑定以供访问。该错误以前用于纹理操作的设备仿真。
  • cudaErrorSynchronizationError = 25
    不推荐使用
    从CUDA 3.1开始不推荐使用此错误返回。 CUDA 3.1发行版删除了设备仿真模式。
    这表明同步操作已失败。以前将其用于某些设备仿真功能。
  • cudaErrorInvalidFilterSetting = 26
    这表明正在使用线性过滤访问非浮点纹理。CUDA不支持此操作。
  • cudaErrorInvalidNormSetting = 27
    这表明试图将非浮点纹理作为归一化浮点数读取。CUDA不支持此操作。
  • cudaErrorMixedDeviceExecution = 28
    不推荐使用
    从CUDA 3.1开始不推荐使用此错误返回。 CUDA 3.1发行版删除了设备仿真模式。
    不允许混用设备和设备仿真代码。
  • cudaErrorNotYetImplemented = 31
    不推荐使用
    从CUDA 4.1开始不推荐使用此错误返回。
    这表明该API调用尚未实现。 CUDA的生产版本永远不会返回此错误。
  • cudaErrorMemoryValueTooLarge = 32
    不推荐使用
    从CUDA 3.1开始不推荐使用此错误返回。 CUDA 3.1发行版删除了设备仿真模式。
    这表明仿真的设备指针超出了32位地址范围。
  • cudaErrorStubLibrary = 34
    这表明应用程序已加载的CUDA驱动程序是存根库。使用存根而不是实际驱动程序运行的应用程序将导致CUDA API返回此错误。
  • cudaErrorInsufficientDriver = 35
    这表明已安装的NVIDIA CUDA驱动程序早于CUDA运行时库。这不是受支持的配置。用户应安装更新的NVIDIA显示驱动程序以允许应用程序运行。
  • cudaErrorCallRequiresNewerDriver = 36
    这表明API调用需要比当前安装的更新的CUDA驱动程序。用户应安装更新的NVIDIA CUDA驱动程序,以允许API调用成功。
  • cudaErrorInvalidSurface = 37
    这表明传递给API调用的表面不是有效表面。
  • cudaErrorDuplicateVariableName = 43
    这表明多个全局或常量变量(跨应用程序中的单独CUDA源文件)共享相同的字符串名称。
  • cudaErrorDuplicateTextureName = 44
    这表明多个纹理(跨应用程序中的单独CUDA源文件)共享相同的字符串名称。
  • cudaErrorDuplicateSurfaceName = 45
    这表明多个表面(跨应用程序中的单独CUDA源文件)共享相同的字符串名称。
  • cudaErrorDevicesUnavailable = 46
    这表明当前所有CUDA设备正忙或不可用。由于使用cudaComputeModeExclusive,cudaComputeModeProhibited或长时间运行的CUDA内核填满了GPU并阻止了新工作的启动,设备通常很忙/不可用。由于已经执行了活动CUDA工作的设备上的内存限制,它们也可能不可用。
  • cudaErrorIncompatibleDriverContext = 49
    这表明当前上下文与此CUDA运行时不兼容。仅当您使用CUDA运行时/驱动程序互操作性,并且已使用驱动程序API创建了现有的驱动程序上下文时,才会发生这种情况。驱动程序上下文不兼容的原因可能是:该驱动程序上下文是使用较旧版本的API创建的;或者运行时API调用期望使用主驱动程序上下文,而该驱动程序上下文不是主上下文;或者该驱动程序上下文已被销毁。有关更多信息,请参见"与CUDA驱动程序API的交互"。
  • cudaErrorMissingConfiguration = 52
    正在调用的设备函数(通常通过cudaLaunchKernel()调用)未事先通过cudaConfigureCall()函数进行配置。
  • cudaErrorPriorLaunchFailure = 53
    不推荐使用
    从CUDA 3.1开始不推荐使用此错误返回。 CUDA 3.1发行版删除了设备仿真模式。
    这表明先前的内核启动失败。该错误以前用于内核启动的设备仿真。
  • cudaErrorLaunchMaxDepthExceeded = 65
    此错误表明未发生设备运行时网格启动,因为子网格的深度将超过嵌套网格启动的最大支持数量。
  • cudaErrorLaunchFileScopedTex = 66
    此错误表明未发生网格启动,因为内核使用了设备运行时不支持的文件作用域纹理。通过设备运行时启动的内核仅支持使用Texture Object API创建的纹理。
  • cudaErrorLaunchFileScopedSurf = 67
    此错误表明未发生网格启动,因为内核使用了设备运行时不支持的文件作用域表面。通过设备运行时启动的内核仅支持使用Surface Object API创建的表面。
  • cudaErrorSyncDepthExceeded = 68
    此错误表示从设备运行时对cudaDeviceSynchronize的调用失败,因为调用发生的网格深度大于默认深度(2级网格)或用户通过设备限制cudaLimitDevRuntimeSyncDepth指定的深度。为了能够在更深层次上成功地对已启动的网格进行同步,必须在主机端启动使用设备运行时的内核之前,通过cudaDeviceSetLimit API的cudaLimitDevRuntimeSyncDepth限制指定将调用cudaDeviceSynchronize的最大嵌套深度。请记住,额外的同步深度级别要求运行时保留大量无法用于用户分配的设备内存。
  • cudaErrorLaunchPendingCountExceeded = 69
    此错误表明设备运行时网格启动失败,因为启动将超出限制cudaLimitDevRuntimePendingLaunchCount。为了使启动成功进行,必须调用cudaDeviceSetLimit才能将cudaLimitDevRuntimePendingLaunchCount设置为高于可以发布给设备运行时的未完成启动的上限。请记住,提高挂起的设备运行时启动的限制将要求运行时保留不能用于用户分配的设备内存。
  • cudaErrorInvalidDeviceFunction = 98
    所请求的设备函数不存在,或者未针对正确的设备体系结构进行编译。
  • cudaErrorNoDevice = 100
    这表明已安装的CUDA驱动程序未检测到具有CUDA功能的设备。
  • cudaErrorInvalidDevice = 101
    这表明用户提供的设备序号与有效的CUDA设备不对应。
  • cudaErrorDeviceNotLicensed = 102
    这表明设备没有有效的GRID许可证。
  • cudaErrorSoftwareValidityNotEstablished = 103
    默认情况下,CUDA运行时可以执行最少的一组自检以及CUDA驱动程序测试,以建立两者的有效性。在CUDA 11.2中引入的此错误返回表明这些测试中至少有一个失败,并且无法确定运行时或驱动程序的有效性。
  • cudaErrorStartupFailure = 127
    这表明CUDA运行时内部启动失败。
  • cudaErrorInvalidKernelImage = 200
    这表明设备内核映像无效。
  • cudaErrorDeviceUninitialized = 201
    这最经常表示没有上下文绑定到当前线程。如果传递给API调用的上下文不是有效的句柄(例如,已对其调用cuCtxDestroy()的上下文),也可以返回此值。如果用户混合使用不同的API版本(即3010上下文和3020 API调用),也可以返回此值。有关更多详细信息,请参见cuCtxGetApiVersion()。
  • cudaErrorMapBufferObjectFailed = 205
    这表明缓冲区对象无法映射。
  • cudaErrorUnmapBufferObjectFailed = 206
    这表明不能取消映射缓冲区对象。
  • cudaErrorArrayIsMapped = 207
    这表明指定的数组当前正在映射,因此无法销毁。
  • cudaErrorAlreadyMapped = 208
    这表明资源已被映射。
  • cudaErrorNoKernelImageForDevice = 209
    这表明没有适用于该设备的内核映像。当用户为特定CUDA源文件指定不包括相应设备配置的代码生成选项时,可能会发生这种情况。
  • cudaErrorAlreadyAcquired = 210
    这表明资源已经被获取。
  • cudaErrorNotMapped = 211
    这表明资源未映射。
  • cudaErrorNotMappedAsArray = 212
    这表明映射的资源不可作为数组访问。
  • cudaErrorNotMappedAsPointer = 213
    这表明映射的资源不可作为指针访问。
  • cudaErrorECCUncorrectable = 214
    这表明在执行过程中检测到不可纠正的ECC错误。
  • cudaErrorUnsupportedLimit = 215
    这表明活动设备不支持传递给API调用的cudaLimit。
  • cudaErrorDeviceAlreadyInUse = 216
    这表明调用试图访问已由其他线程使用的独占线程设备。
  • cudaErrorPeerAccessUnsupported = 217
    此错误表明在给定的设备上不支持P2P访问。
  • cudaErrorInvalidPtx = 218
    PTX编译失败。如果应用程序不包含适用于当前设备的二进制文件,则运行时可能会回退到编译PTX。
  • cudaErrorInvalidGraphicsContext = 219
    这表示OpenGL或DirectX上下文错误。
  • cudaErrorNvlinkUncorrectable = 220
    这表明在执行过程中检测到不可纠正的NVLink错误。
  • cudaErrorJitCompilerNotFound = 221
    这表明未找到PTX JIT编译器库。JIT编译器库用于PTX编译。如果应用程序不包含适用于当前设备的二进制文件,则运行时可能会回退到编译PTX。
  • cudaErrorUnsupportedPtxVersion = 222
    这表明提供的PTX是使用不受支持的工具链编译的。最常见的原因是PTX是由比CUDA驱动程序和PTX JIT编译器支持的编译器更新的编译器生成的。
  • cudaErrorJitCompilationDisabled = 223
    这表明JIT编译已被禁用。JIT编译用于编译PTX。如果应用程序不包含适用于当前设备的二进制文件,则运行时可能会回退到编译PTX。
  • cudaErrorInvalidSource = 300
    这表明设备内核源无效。
  • cudaErrorFileNotFound = 301
    这表明找不到指定的文件。
  • cudaErrorSharedObjectSymbolNotFound = 302
    这表明指向共享库的链接无法解析。
  • cudaErrorSharedObjectInitFailed = 303
    这表明共享对象的初始化失败。
  • cudaErrorOperatingSystem = 304
    此错误表明OS调用失败。
  • cudaErrorInvalidResourceHandle = 400
    这表明传递给API调用的资源句柄无效。资源句柄是不透明的类型,例如cudaStream_t和cudaEvent_t。
  • cudaErrorIllegalState = 401
    这表明API调用所需的资源未处于有效状态以执行请求的操作。
  • cudaErrorSymbolNotFound = 500
    这表明未找到命名符号。符号的示例是全局/常量变量名称,纹理名称和表面名称。
  • cudaErrorNotReady = 600
    这表明先前发出的异步操作尚未完成。该结果实际上不是错误,但必须与表示已完成的cudaSuccess区分开来。可能返回此值的调用包括cudaEventQuery()和cudaStreamQuery()。
  • cudaErrorIllegalAddress = 700
    设备在无效的存储器地址上遇到了加载或存储指令。这会使进程处于不一致状态,并且任何进一步的CUDA工作都将返回相同的错误。要继续使用CUDA,必须终止该进程并重新启动。
  • cudaErrorLaunchOutOfResources = 701
    这表明启动未发生,因为它没有获得适当的资源。尽管此错误与cudaErrorInvalidConfiguration类似,但它通常表明用户尝试向设备内核传递了过多参数,或者内核启动相对于内核的寄存器用量指定了过多线程。
  • cudaErrorLaunchTimeout = 702
    这表明设备内核执行所需的时间太长。仅在启用超时的情况下才会发生这种情况,有关更多信息,请参见设备属性kernelExecTimeoutEnabled。这会使进程处于不一致状态,并且任何进一步的CUDA工作都将返回相同的错误。要继续使用CUDA,必须终止该进程并重新启动。
  • cudaErrorLaunchIncompatibleTexturing = 703
    该错误表明内核启动使用了不兼容的纹理模式。
  • cudaErrorPeerAccessAlreadyEnabled = 704
    此错误表明对cudaDeviceEnablePeerAccess()的调用正在尝试从已启用对等寻址的上下文中重新启用对等寻址。
  • cudaErrorPeerAccessNotEnabled = 705
    此错误表明cudaDeviceDisablePeerAccess()试图禁用尚未通过cudaDeviceEnablePeerAccess()启用的对等寻址。
  • cudaErrorSetOnActiveProcess = 708
    这表明用户在通过调用非设备管理操作(例如分配内存和启动内核)初始化CUDA运行时之后,又调用了cudaSetValidDevices()、cudaSetDeviceFlags()、cudaD3D9SetDirect3DDevice()、cudaD3D10SetDirect3DDevice()、cudaD3D11SetDirect3DDevice()或cudaVDPAUSetVDPAUDevice()。如果使用运行时/驱动程序互操作性且主机线程上存在活动的CUcontext,也可能返回此错误。
  • cudaErrorContextIsDestroyed = 709
    该错误表明调用方线程的当前上下文已使用cuCtxDestroy销毁,或者是尚未初始化的主上下文。
  • cudaErrorAssert = 710
    在内核执行期间,设备代码中触发了断言。该设备无法再使用。所有现有分配均无效。要继续使用CUDA,必须终止该进程并重新启动。
  • cudaErrorTooManyPeers = 711
    此错误表明,传递给cudaEnablePeerAccess()的一个或多个设备已耗尽了启用对等访问所需的硬件资源。
  • cudaErrorHostMemoryAlreadyRegistered = 712
    此错误表明传递给cudaHostRegister()的内存范围已被注册。
  • cudaErrorHostMemoryNotRegistered = 713
    此错误表明传递给cudaHostUnregister()的指针与任何当前注册的内存区域都不对应。
  • cudaErrorHardwareStackError = 714
    设备在内核执行期间在调用堆栈中遇到错误,可能是由于堆栈损坏或超出堆栈大小限制所致。这会使进程处于不一致状态,并且任何进一步的CUDA工作都将返回相同的错误。要继续使用CUDA,必须终止该进程并重新启动。
  • cudaErrorIllegalInstruction = 715
    设备在内核执行期间遇到了非法指令。这会使进程处于不一致状态,并且任何进一步的CUDA工作都将返回相同的错误。要继续使用CUDA,必须终止该进程并重新启动。
  • cudaErrorMisalignedAddress = 716
    设备在未对齐的存储器地址上遇到了加载或存储指令。这会使进程处于不一致状态,并且任何进一步的CUDA工作都将返回相同的错误。要继续使用CUDA,必须终止该进程并重新启动。
  • cudaErrorInvalidAddressSpace = 717
    在执行内核时,设备遇到一条只能在某些地址空间(全局、共享或本地)中的存储器位置上操作的指令,但所提供的存储器地址不属于允许的地址空间。这会使进程处于不一致状态,并且任何进一步的CUDA工作都将返回相同的错误。要继续使用CUDA,必须终止该进程并重新启动。
  • cudaErrorInvalidPc = 718
    设备遇到无效的程序计数器。这会使进程处于不一致状态,并且任何进一步的CUDA工作都将返回相同的错误。要继续使用CUDA,必须终止该进程并重新启动。
  • cudaErrorLaunchFailure = 719
    执行内核时设备上发生了异常。常见原因包括解引用无效的设备指针和越界访问共享内存。不太常见的情况可能与具体系统相关,有关这些情况的更多信息,请参见特定于系统的用户指南。这会使进程处于不一致状态,并且任何进一步的CUDA工作都将返回相同的错误。要继续使用CUDA,必须终止该进程并重新启动。
  • cudaErrorCooperativeLaunchTooLarge = 720
    此错误表示对于通过cudaLaunchCooperativeKernel或cudaLaunchCooperativeKernelMultiDevice启动的内核,每个网格启动的块数超过了cudaOccupancyMaxActiveBlocksPerMultiprocessor或cudaOccupancyMaxActiveBlocksPerMultiprocessorWithFlags所允许的最大块数乘以设备属性cudaDevAttrMultiProcessorCount指定的多处理器数量。
  • cudaErrorNotPermitted = 800
    该错误表明尝试的操作是不允许的。
  • cudaErrorNotSupported = 801
    此错误表明当前系统或设备不支持尝试的操作。
  • cudaErrorSystemNotReady = 802
    此错误表明系统尚未准备好开始任何CUDA工作。要继续使用CUDA,请确认系统配置处于有效状态,并且所有必需的驱动程序守护程序都正在运行。有关此错误的更多信息,请参见系统特定的用户指南。
  • cudaErrorSystemDriverMismatch = 803
    此错误表明显示驱动程序和CUDA驱动程序的版本不匹配。有关支持的版本,请参阅兼容性文档。
  • cudaErrorCompatNotSupportedOnDevice = 804
    该错误表明系统已升级为可以向前兼容运行,但是CUDA检测到的可见硬件不支持此配置。有关支持的硬件矩阵,请参阅兼容性文档,或通过CUDA_VISIBLE_DEVICES环境变量确保在初始化期间仅可见支持的硬件。
  • cudaErrorStreamCaptureUnsupported = 900
    捕获流时,不允许该操作。
  • cudaErrorStreamCaptureInvalidated = 901
    由于先前的错误,流上的当前捕获序列已无效。
  • cudaErrorStreamCaptureMerge = 902
    该操作将导致两个独立捕获序列的合并。
  • cudaErrorStreamCaptureUnmatched = 903
    捕获未在此流中启动。
  • cudaErrorStreamCaptureUnjoined = 904
    捕获序列包含一个未加入主流的分支。
  • cudaErrorStreamCaptureIsolation = 905
    将创建一个跨越捕获序列边界的依赖项。仅允许隐式流内顺序依赖项跨越边界。
  • cudaErrorStreamCaptureImplicit = 906
    该操作将导致对来自cudaStreamLegacy的当前捕获序列的隐式依赖。
  • cudaErrorCapturedEvent = 907
    对于最后记录在捕获流中的事件,不允许执行该操作。
  • cudaErrorStreamCaptureWrongThread = 908
    未使用cudaStreamBeginCapture的cudaStreamCaptureModeRelaxed参数启动的流捕获序列已在另一个线程中传递给cudaStreamEndCapture。
  • cudaErrorTimeout = 909
    这表明等待操作已超时。
  • cudaErrorGraphExecUpdateFailure = 910
    此错误表示未执行图(graph)更新,因为其中包含的更改违反了实例化图更新特有的约束。
  • cudaErrorUnknown = 999
    这表明发生了未知的内部错误。
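
上面列出的cudaErrorInvalidConfiguration、cudaErrorLaunchOutOfResources属于启动时同步报告的错误,而cudaErrorIllegalAddress等内核执行期错误是异步报告的,需要在同步点才能捕获。下面是一个仅作演示的假设性示意(按多数设备每块最多1024个线程的常见上限构造错误配置):

```cpp
#include <cstdio>
#include <cuda_runtime.h>

__global__ void dummyKernel(int* out) {
    out[threadIdx.x] = threadIdx.x;
}

int main() {
    int* d_out = nullptr;
    cudaMalloc(&d_out, 2048 * sizeof(int));

    // 故意指定超过多数设备上限(1024)的每块线程数,启动本身即失败
    dummyKernel<<<1, 2048>>>(d_out);

    // 启动配置错误(通常报告为 cudaErrorInvalidConfiguration)在启动时同步返回
    cudaError_t launchErr = cudaGetLastError();
    if (launchErr != cudaSuccess)
        std::printf("launch error: %s\n", cudaGetErrorName(launchErr));

    // 内核执行期错误(如 cudaErrorIllegalAddress)异步报告,
    // 需要在 cudaDeviceSynchronize 等同步点检查
    cudaError_t execErr = cudaDeviceSynchronize();
    if (execErr != cudaSuccess)
        std::printf("execution error: %s\n", cudaGetErrorName(execErr));

    cudaFree(d_out);
    return 0;
}
```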

参考资料:CUDA官方文档

CUDA Error types

  • cudaSuccess = 0
    The API call returned with no errors. In the case of query calls, this also means that the operation being queried is complete (see cudaEventQuery() and cudaStreamQuery()).

  • cudaErrorInvalidValue = 1
    This indicates that one or more of the parameters passed to the API call is not within an acceptable range of values.

  • cudaErrorMemoryAllocation = 2
    The API call failed because it was unable to allocate enough memory to perform the requested operation.

  • cudaErrorInitializationError = 3
    The API call failed because the CUDA driver and runtime could not be initialized.

  • cudaErrorCudartUnloading = 4
    This indicates that a CUDA Runtime API call cannot be executed because it is being called during process shut down, at a point in time after CUDA driver has been unloaded.

  • cudaErrorProfilerDisabled = 5
    This indicates profiler is not initialized for this run. This can happen when the application is running with external profiling tools like visual profiler.

  • cudaErrorProfilerNotInitialized = 6
    Deprecated
    This error return is deprecated as of CUDA 5.0. It is no longer an error to attempt to enable/disable the profiling via cudaProfilerStart or cudaProfilerStop without initialization.

  • cudaErrorProfilerAlreadyStarted = 7
    Deprecated
    This error return is deprecated as of CUDA 5.0. It is no longer an error to call cudaProfilerStart() when profiling is already enabled.

  • cudaErrorProfilerAlreadyStopped = 8
    Deprecated
    This error return is deprecated as of CUDA 5.0. It is no longer an error to call cudaProfilerStop() when profiling is already disabled.

  • cudaErrorInvalidConfiguration = 9
    This indicates that a kernel launch is requesting resources that can never be satisfied by the current device. Requesting more shared memory per block than the device supports will trigger this error, as will requesting too many threads or blocks. See cudaDeviceProp for more device limitations.

  • cudaErrorInvalidPitchValue = 12
    This indicates that one or more of the pitch-related parameters passed to the API call is not within the acceptable range for pitch.

  • cudaErrorInvalidSymbol = 13
    This indicates that the symbol name/identifier passed to the API call is not a valid name or identifier.

  • cudaErrorInvalidHostPointer = 16
    Deprecated
    This error return is deprecated as of CUDA 10.1.

This indicates that at least one host pointer passed to the API call is not a valid host pointer.

  • cudaErrorInvalidDevicePointer = 17
    Deprecated
    This error return is deprecated as of CUDA 10.1.

This indicates that at least one device pointer passed to the API call is not a valid device pointer.

  • cudaErrorInvalidTexture = 18
    This indicates that the texture passed to the API call is not a valid texture.
  • cudaErrorInvalidTextureBinding = 19
    This indicates that the texture binding is not valid. This occurs if you call cudaGetTextureAlignmentOffset() with an unbound texture.
  • cudaErrorInvalidChannelDescriptor = 20
    This indicates that the channel descriptor passed to the API call is not valid. This occurs if the format is not one of the formats specified by cudaChannelFormatKind, or if one of the dimensions is invalid.
  • cudaErrorInvalidMemcpyDirection = 21
    This indicates that the direction of the memcpy passed to the API call is not one of the types specified by cudaMemcpyKind.
  • cudaErrorAddressOfConstant = 22
    Deprecated
    This error return is deprecated as of CUDA 3.1. Variables in constant memory may now have their address taken by the runtime via cudaGetSymbolAddress().

This indicated that the user has taken the address of a constant variable, which was forbidden up until the CUDA 3.1 release.

  • cudaErrorTextureFetchFailed = 23
    Deprecated
    This error return is deprecated as of CUDA 3.1. Device emulation mode was removed with the CUDA 3.1 release.

This indicated that a texture fetch was not able to be performed. This was previously used for device emulation of texture operations.

  • cudaErrorTextureNotBound = 24
    Deprecated
    This error return is deprecated as of CUDA 3.1. Device emulation mode was removed with the CUDA 3.1 release.

This indicated that a texture was not bound for access. This was previously used for device emulation of texture operations.

  • cudaErrorSynchronizationError = 25
    Deprecated
    This error return is deprecated as of CUDA 3.1. Device emulation mode was removed with the CUDA 3.1 release.

This indicated that a synchronization operation had failed. This was previously used for some device emulation functions.

  • cudaErrorInvalidFilterSetting = 26
    This indicates that a non-float texture was being accessed with linear filtering. This is not supported by CUDA.
  • cudaErrorInvalidNormSetting = 27
    This indicates that an attempt was made to read a non-float texture as a normalized float. This is not supported by CUDA.
  • cudaErrorMixedDeviceExecution = 28
    Deprecated
    This error return is deprecated as of CUDA 3.1. Device emulation mode was removed with the CUDA 3.1 release.

Mixing of device and device emulation code was not allowed.

  • cudaErrorNotYetImplemented = 31
    Deprecated
    This error return is deprecated as of CUDA 4.1.

This indicates that the API call is not yet implemented. Production releases of CUDA will never return this error.

  • cudaErrorMemoryValueTooLarge = 32
    Deprecated
    This error return is deprecated as of CUDA 3.1. Device emulation mode was removed with the CUDA 3.1 release.

This indicated that an emulated device pointer exceeded the 32-bit address range.

  • cudaErrorStubLibrary = 34
    This indicates that the CUDA driver that the application has loaded is a stub library. Applications that run with the stub rather than a real driver loaded will result in CUDA API returning this error.

  • cudaErrorInsufficientDriver = 35
    This indicates that the installed NVIDIA CUDA driver is older than the CUDA runtime library. This is not a supported configuration. Users should install an updated NVIDIA display driver to allow the application to run.

  • cudaErrorCallRequiresNewerDriver = 36
    This indicates that the API call requires a newer CUDA driver than the one currently installed. Users should install an updated NVIDIA CUDA driver to allow the API call to succeed.

  • cudaErrorInvalidSurface = 37
    This indicates that the surface passed to the API call is not a valid surface.

  • cudaErrorDuplicateVariableName = 43
    This indicates that multiple global or constant variables (across separate CUDA source files in the application) share the same string name.

  • cudaErrorDuplicateTextureName = 44
    This indicates that multiple textures (across separate CUDA source files in the application) share the same string name.

  • cudaErrorDuplicateSurfaceName = 45
    This indicates that multiple surfaces (across separate CUDA source files in the application) share the same string name.

  • cudaErrorDevicesUnavailable = 46
    This indicates that all CUDA devices are busy or unavailable at the current time. Devices are often busy/unavailable due to use of cudaComputeModeExclusive, cudaComputeModeProhibited or when long running CUDA kernels have filled up the GPU and are blocking new work from starting. They can also be unavailable due to memory constraints on a device that already has active CUDA work being performed.

  • cudaErrorIncompatibleDriverContext = 49
    This indicates that the current context is not compatible with this CUDA Runtime. This can only occur if you are using CUDA Runtime/Driver interoperability and have created an existing Driver context using the driver API. The Driver context may be incompatible either because the Driver context was created using an older version of the API, because the Runtime API call expects a primary driver context and the Driver context is not primary, or because the Driver context has been destroyed. Please see "Interactions with the CUDA Driver API" for more information.

  • cudaErrorMissingConfiguration = 52
    The device function being invoked (usually via cudaLaunchKernel()) was not previously configured via the cudaConfigureCall() function.

  • cudaErrorPriorLaunchFailure = 53
    Deprecated
    This error return is deprecated as of CUDA 3.1. Device emulation mode was removed with the CUDA 3.1 release.
    This indicated that a previous kernel launch failed. This was previously used for device emulation of kernel launches.

  • cudaErrorLaunchMaxDepthExceeded = 65
    This error indicates that a device runtime grid launch did not occur because the depth of the child grid would exceed the maximum supported number of nested grid launches.

  • cudaErrorLaunchFileScopedTex = 66
    This error indicates that a grid launch did not occur because the kernel uses file-scoped textures which are unsupported by the device runtime. Kernels launched via the device runtime only support textures created with the Texture Object APIs.

  • cudaErrorLaunchFileScopedSurf = 67
    This error indicates that a grid launch did not occur because the kernel uses file-scoped surfaces which are unsupported by the device runtime. Kernels launched via the device runtime only support surfaces created with the Surface Object APIs.

  • cudaErrorSyncDepthExceeded = 68
    This error indicates that a call to cudaDeviceSynchronize made from the device runtime failed because the call was made at grid depth greater than either the default (2 levels of grids) or user specified device limit cudaLimitDevRuntimeSyncDepth. To be able to synchronize on launched grids at a greater depth successfully, the maximum nested depth at which cudaDeviceSynchronize will be called must be specified with the cudaLimitDevRuntimeSyncDepth limit to the cudaDeviceSetLimit API before the host-side launch of a kernel using the device runtime. Keep in mind that additional levels of sync depth require the runtime to reserve large amounts of device memory that cannot be used for user allocations.

  • cudaErrorLaunchPendingCountExceeded = 69
    This error indicates that a device runtime grid launch failed because the launch would exceed the limit cudaLimitDevRuntimePendingLaunchCount. For this launch to proceed successfully, cudaDeviceSetLimit must be called to set the cudaLimitDevRuntimePendingLaunchCount to be higher than the upper bound of outstanding launches that can be issued to the device runtime. Keep in mind that raising the limit of pending device runtime launches will require the runtime to reserve device memory that cannot be used for user allocations.

  • cudaErrorInvalidDeviceFunction = 98
    The requested device function does not exist or is not compiled for the proper device architecture.

  • cudaErrorNoDevice = 100
    This indicates that no CUDA-capable devices were detected by the installed CUDA driver.

  • cudaErrorInvalidDevice = 101
    This indicates that the device ordinal supplied by the user does not correspond to a valid CUDA device.

  • cudaErrorDeviceNotLicensed = 102
    This indicates that the device doesn’t have a valid Grid License.

  • cudaErrorSoftwareValidityNotEstablished = 103
    By default, the CUDA runtime may perform a minimal set of self-tests, as well as CUDA driver tests, to establish the validity of both. Introduced in CUDA 11.2, this error return indicates that at least one of these tests has failed and the validity of either the runtime or the driver could not be established.

  • cudaErrorStartupFailure = 127
    This indicates an internal startup failure in the CUDA runtime.

  • cudaErrorInvalidKernelImage = 200
    This indicates that the device kernel image is invalid.

  • cudaErrorDeviceUninitialized = 201
    This most frequently indicates that there is no context bound to the current thread. This can also be returned if the context passed to an API call is not a valid handle (such as a context that has had cuCtxDestroy() invoked on it). This can also be returned if a user mixes different API versions (i.e. 3010 context with 3020 API calls). See cuCtxGetApiVersion() for more details.

  • cudaErrorMapBufferObjectFailed = 205
    This indicates that the buffer object could not be mapped.

  • cudaErrorUnmapBufferObjectFailed = 206
    This indicates that the buffer object could not be unmapped.

  • cudaErrorArrayIsMapped = 207
    This indicates that the specified array is currently mapped and thus cannot be destroyed.

  • cudaErrorAlreadyMapped = 208
    This indicates that the resource is already mapped.

  • cudaErrorNoKernelImageForDevice = 209
    This indicates that there is no kernel image available that is suitable for the device. This can occur when a user specifies code generation options for a particular CUDA source file that do not include the corresponding device configuration.

  • cudaErrorAlreadyAcquired = 210
    This indicates that a resource has already been acquired.

  • cudaErrorNotMapped = 211
    This indicates that a resource is not mapped.

  • cudaErrorNotMappedAsArray = 212
    This indicates that a mapped resource is not available for access as an array.

  • cudaErrorNotMappedAsPointer = 213
    This indicates that a mapped resource is not available for access as a pointer.

  • cudaErrorECCUncorrectable = 214
    This indicates that an uncorrectable ECC error was detected during execution.

  • cudaErrorUnsupportedLimit = 215
    This indicates that the cudaLimit passed to the API call is not supported by the active device.

  • cudaErrorDeviceAlreadyInUse = 216
    This indicates that a call tried to access an exclusive-thread device that is already in use by a different thread.

  • cudaErrorPeerAccessUnsupported = 217
    This error indicates that P2P access is not supported across the given devices.

  • cudaErrorInvalidPtx = 218
    A PTX compilation failed. The runtime may fall back to compiling PTX if an application does not contain a suitable binary for the current device.

  • cudaErrorInvalidGraphicsContext = 219
    This indicates an error with the OpenGL or DirectX context.

  • cudaErrorNvlinkUncorrectable = 220
    This indicates that an uncorrectable NVLink error was detected during the execution.

  • cudaErrorJitCompilerNotFound = 221
    This indicates that the PTX JIT compiler library was not found. The JIT Compiler library is used for PTX compilation. The runtime may fall back to compiling PTX if an application does not contain a suitable binary for the current device.

  • cudaErrorUnsupportedPtxVersion = 222
    This indicates that the provided PTX was compiled with an unsupported toolchain. The most common reason for this is that the PTX was generated by a compiler newer than what is supported by the CUDA driver and PTX JIT compiler.

  • cudaErrorJitCompilationDisabled = 223
    This indicates that the JIT compilation was disabled. The JIT compilation compiles PTX. The runtime may fall back to compiling PTX if an application does not contain a suitable binary for the current device.

  • cudaErrorInvalidSource = 300
    This indicates that the device kernel source is invalid.

  • cudaErrorFileNotFound = 301
    This indicates that the file specified was not found.

  • cudaErrorSharedObjectSymbolNotFound = 302
    This indicates that a link to a shared object failed to resolve.

  • cudaErrorSharedObjectInitFailed = 303
    This indicates that initialization of a shared object failed.

  • cudaErrorOperatingSystem = 304
    This error indicates that an OS call failed.

  • cudaErrorInvalidResourceHandle = 400
    This indicates that a resource handle passed to the API call was not valid. Resource handles are opaque types like cudaStream_t and cudaEvent_t.

  • cudaErrorIllegalState = 401
    This indicates that a resource required by the API call is not in a valid state to perform the requested operation.

  • cudaErrorSymbolNotFound = 500
    This indicates that a named symbol was not found. Examples of symbols are global/constant variable names, texture names, and surface names.

  • cudaErrorNotReady = 600
    This indicates that asynchronous operations issued previously have not completed yet. This result is not actually an error, but must be indicated differently than cudaSuccess (which indicates completion). Calls that may return this value include cudaEventQuery() and cudaStreamQuery(). (A minimal polling sketch appears after this list.)

  • cudaErrorIllegalAddress = 700
    The device encountered a load or store instruction on an invalid memory address. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.

  • cudaErrorLaunchOutOfResources = 701
    This indicates that a launch did not occur because it did not have appropriate resources. Although this error is similar to - cudaErrorInvalidConfiguration, this error usually indicates that the user has attempted to pass too many arguments to the device kernel, or the kernel launch specifies too many threads for the kernel’s register count.

  • cudaErrorLaunchTimeout = 702
    This indicates that the device kernel took too long to execute. This can only occur if timeouts are enabled - see the device property kernelExecTimeoutEnabled for more information. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.

  • cudaErrorLaunchIncompatibleTexturing = 703
    This error indicates a kernel launch that uses an incompatible texturing mode.

  • cudaErrorPeerAccessAlreadyEnabled = 704
    This error indicates that a call to cudaDeviceEnablePeerAccess() is trying to re-enable peer addressing from a context which has already had peer addressing enabled.

  • cudaErrorPeerAccessNotEnabled = 705
    This error indicates that cudaDeviceDisablePeerAccess() is trying to disable peer addressing which has not been enabled yet via cudaDeviceEnablePeerAccess().

  • cudaErrorSetOnActiveProcess = 708
    This indicates that the user has called cudaSetValidDevices(), cudaSetDeviceFlags(), cudaD3D9SetDirect3DDevice(), cudaD3D10SetDirect3DDevice, cudaD3D11SetDirect3DDevice(), or cudaVDPAUSetVDPAUDevice() after initializing the CUDA runtime by calling non-device management operations (allocating memory and launching kernels are examples of non-device management operations). This error can also be returned if using runtime/driver interoperability and there is an existing CUcontext active on the host thread.

  • cudaErrorContextIsDestroyed = 709
    This error indicates that the context current to the calling thread has been destroyed using cuCtxDestroy, or is a primary context which has not yet been initialized.

  • cudaErrorAssert = 710
    An assert triggered in device code during kernel execution. The device cannot be used again. All existing allocations are invalid. To continue using CUDA, the process must be terminated and relaunched.

  • cudaErrorTooManyPeers = 711
    This error indicates that the hardware resources required to enable peer access have been exhausted for one or more of the devices passed to cudaEnablePeerAccess().

  • cudaErrorHostMemoryAlreadyRegistered = 712
    This error indicates that the memory range passed to cudaHostRegister() has already been registered.

  • cudaErrorHostMemoryNotRegistered = 713
    This error indicates that the pointer passed to cudaHostUnregister() does not correspond to any currently registered memory region.

  • cudaErrorHardwareStackError = 714
    Device encountered an error in the call stack during kernel execution, possibly due to stack corruption or exceeding the stack size limit. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.

  • cudaErrorIllegalInstruction = 715
    The device encountered an illegal instruction during kernel execution. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.

  • cudaErrorMisalignedAddress = 716
    The device encountered a load or store instruction on a memory address which is not aligned. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.

  • cudaErrorInvalidAddressSpace = 717
    While executing a kernel, the device encountered an instruction which can only operate on memory locations in certain address spaces (global, shared, or local), but was supplied a memory address not belonging to an allowed address space. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.

  • cudaErrorInvalidPc = 718
    The device encountered an invalid program counter. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.

  • cudaErrorLaunchFailure = 719
    An exception occurred on the device while executing a kernel. Common causes include dereferencing an invalid device pointer and accessing out of bounds shared memory. Less common cases can be system specific - more information about these cases can be found in the system specific user guide. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.

  • cudaErrorCooperativeLaunchTooLarge = 720
    This error indicates that the number of blocks launched per grid for a kernel that was launched via either cudaLaunchCooperativeKernel or cudaLaunchCooperativeKernelMultiDevice exceeds the maximum number of blocks as allowed by cudaOccupancyMaxActiveBlocksPerMultiprocessor or cudaOccupancyMaxActiveBlocksPerMultiprocessorWithFlags times the number of multiprocessors as specified by the device attribute cudaDevAttrMultiProcessorCount.

  • cudaErrorNotPermitted = 800
    This error indicates the attempted operation is not permitted.

  • cudaErrorNotSupported = 801
    This error indicates the attempted operation is not supported on the current system or device.

  • cudaErrorSystemNotReady = 802
    This error indicates that the system is not yet ready to start any CUDA work. To continue using CUDA, verify the system configuration is in a valid state and all required driver daemons are actively running. More information about this error can be found in the system specific user guide.

  • cudaErrorSystemDriverMismatch = 803
    This error indicates that there is a mismatch between the versions of the display driver and the CUDA driver. Refer to the compatibility documentation for supported versions.

  • cudaErrorCompatNotSupportedOnDevice = 804
    This error indicates that the system was upgraded to run with forward compatibility but the visible hardware detected by CUDA does not support this configuration. Refer to the compatibility documentation for the supported hardware matrix or ensure that only supported hardware is visible during initialization via the CUDA_VISIBLE_DEVICES environment variable.

  • cudaErrorStreamCaptureUnsupported = 900
    The operation is not permitted when the stream is capturing.

  • cudaErrorStreamCaptureInvalidated = 901
    The current capture sequence on the stream has been invalidated due to a previous error.

  • cudaErrorStreamCaptureMerge = 902
    The operation would have resulted in a merge of two independent capture sequences.

  • cudaErrorStreamCaptureUnmatched = 903
    The capture was not initiated in this stream.

  • cudaErrorStreamCaptureUnjoined = 904
    The capture sequence contains a fork that was not joined to the primary stream.

  • cudaErrorStreamCaptureIsolation = 905
    A dependency would have been created which crosses the capture sequence boundary. Only implicit in-stream ordering dependencies are allowed to cross the boundary.

  • cudaErrorStreamCaptureImplicit = 906
    The operation would have resulted in a disallowed implicit dependency on a current capture sequence from cudaStreamLegacy.

  • cudaErrorCapturedEvent = 907
    The operation is not permitted on an event which was last recorded in a capturing stream.

  • cudaErrorStreamCaptureWrongThread = 908
    A stream capture sequence not initiated with the cudaStreamCaptureModeRelaxed argument to cudaStreamBeginCapture was passed to cudaStreamEndCapture in a different thread.

  • cudaErrorTimeout = 909
    This indicates that the wait operation has timed out.

  • cudaErrorGraphExecUpdateFailure = 910
    This error indicates that the graph update was not performed because it included changes which violated constraints specific to instantiated graph update.

  • cudaErrorUnknown = 999
    This indicates that an unknown internal error has occurred.

  • cudaErrorApiFailureBase = 10000
    Deprecated
    This error return is deprecated as of CUDA 4.1.

Any unhandled CUDA driver error is added to this value and returned via the runtime. Production releases of CUDA should not return such errors.
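
As noted for cudaErrorNotReady above, some return values are status indicators rather than failures. The following is a minimal, illustrative sketch (not taken from the NVIDIA documentation) of polling an event with cudaEventQuery(), which keeps returning cudaErrorNotReady until the work preceding the event has finished:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float* x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;
}

int main() {
    const int n = 1 << 20;
    float* d_x = nullptr;
    cudaMalloc(&d_x, n * sizeof(float));

    cudaEvent_t done;
    cudaEventCreate(&done);

    scale<<<(n + 255) / 256, 256>>>(d_x, n);
    cudaEventRecord(done);  // recorded after the kernel in the default stream

    // cudaErrorNotReady simply means the work before the event is still running.
    cudaError_t status;
    while ((status = cudaEventQuery(done)) == cudaErrorNotReady) {
        // The host could do useful work here instead of blocking.
    }
    std::printf("final status: %s\n", cudaGetErrorName(status));  // cudaSuccess on success

    cudaEventDestroy(done);
    cudaFree(d_x);
    return 0;
}
```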

参考链接

https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html#group__CUDART__TYPES_1g3f51e3575c2178246db0a94a430e0038

