
Newer versions of FFmpeg (from October 2016 onwards) support hardware decoding


FFmpeg provides a subsystem for hardware acceleration.

Hardware acceleration allows the use of specific devices (usually graphics cards or other dedicated hardware) to perform multimedia processing. This allows dedicated hardware to perform demanding computations while freeing the CPU from them. Typically, hardware acceleration enables specific hardware devices (usually the GPU) to perform operations related to decoding and encoding video streams, or to filtering video.

When using the ffmpeg command-line tool, HW-assisted decoding is enabled through the -hwaccel option, which enables a specific decoder. Each decoder may have specific limitations (for example, an H.264 decoder may only support the baseline profile). HW-assisted encoding is enabled through the use of a specific encoder (for example h264_nvenc). HW-assisted filtering is only supported in a few filters, and in that case you enable the OpenCL code through a filter option.
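
To see which hwaccels and hardware codecs a given build was compiled with, the generic introspection commands below can be used (the output depends on the configure options of the build):

# list the hardware acceleration methods compiled into this ffmpeg build
ffmpeg -hide_banner -hwaccels

# hardware encoders/decoders such as h264_nvenc or h264_qsv only show up
# in these lists if the corresponding support was enabled at build time
ffmpeg -hide_banner -encoders
ffmpeg -hide_banner -decoders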

There are several hardware acceleration standards and APIs, some of which are supported to some extent by FFmpeg.

Platforms overview

API availability

API           | Linux Intel | Linux NVIDIA | Windows Intel | Windows NVIDIA | OS X | Android | iOS | Raspberry Pi
CUDA          | N           | Y            | N             | Y              | Y    | N       | N   | N
Direct3D 11   | N           | N            | Y             | Y              | N    | N       | N   | N
DXVA2         | N           | N            | Y             | Y              | N    | N       | N   | N
MediaCodec    | N           | N            | N             | N              | N    | Y       | N   | N
MMAL          | N           | N            | N             | N              | N    | N       | N   | Y
NVENC         | N           | Y            | N             | Y              | N    | N       | N   | N
OpenCL        | Y           | Y            | Y             | Y              | Y    | N       | N   | N
Quick Sync    | Y           | N            | Y             | N              | N    | N       | N   | N
VA-API        | Y           | Y*           | N             | N              | N    | N       | N   | N
VDA†          | N           | N            | N             | N              | Y    | N       | N   | N
VDPAU         | N           | Y            | N             | N              | N    | N       | N   | N
VideoToolbox  | N           | N            | N             | N              | Y    | N       | N   | N
XvMC          | Y           | Y            | N             | N              | N    | N       | N   | N

* Semi-maintained.

† Deprecated by upstream.

FFmpeg implementations

API           | AVHWAccel | Decoder | Encoder | CLI | Filtering | AVHWFramesContext
CUDA¹         | Y         | Y       | N²      | Y   | Y         | Y
Direct3D 11   | Y         | N       | N/A     | N   | N         | N
DXVA2         | Y         | N       | N/A     | Y   | N         | Y
MediaCodec    | Y         | Y       | N       | N/A | N/A       | N
MMAL          | Y         | Y       | N/A     | N   | N/A       | N
NVENC         | N/A       | N³      | Y       | Y   | N/A       | N
OpenCL        | N/A       | N/A     | N/A     | N/A | Y         | N
Quick Sync    | Y         | Y       | Y       | Y   | N         | N*
VA-API        | Y         | N       | Y       | Y   | Y         | Y
VDA           | Y         | Y       | N/A     | Y   | N/A       | N
VDPAU         | Y         | N†      | N/A     | Y   | N         | Y
VideoToolbox  | Y         | N       | Y       | Y   | N         | N
XvMC          | Y         | N†      | N/A     | N   | N/A       | N

N/A This feature is not directly supported by the API, or is not currently implementable.

* Work in progress. If "Y" is indicated, infrastructure is in place but no filters have been implemented yet.

† Actually yes, but is deprecated for technical reasons and should not be used.

¹ Also known as the "CUDA Video Decoding API", "CUVID", or "NvDecode".

² See NVENC.

³ See CUDA.

VDPAU

Video Decode and Presentation API for Unix. Developed by NVIDIA for UNIX/Linux systems. To enable this you typically need the libvdpau development package in your distribution, and a compatible graphics card.

Note that VDPAU cannot be used to decode frames in system memory: the compressed frames are sent by libavcodec to the GPU device supported by VDPAU, and the decoded image can then be accessed through the VDPAU API. This is not done automatically by FFmpeg, but must be handled at the application level (see for example the ffmpeg_vdpau.c file used by ffmpeg.c). Also note that with this API it is not possible to move the decoded frame back to RAM, for example in case you need to encode the decoded frame again (e.g. when transcoding on a server).

Several decoders are currently supported through VDPAU in libavcodec, in particular H.264, MPEG-1/2/4, and VC-1.
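
As a quick smoke test (mirroring the DXVA2 test further below), the following hedged example decodes through VDPAU and discards the output; it assumes a VDPAU-capable card and driver:

# decode via VDPAU, discard the frames and print benchmark timings
ffmpeg -hwaccel vdpau -i INPUT -f null - -benchmark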

XvMC

XVideo Motion Compensation. This is an extension of the X video extension (Xv) for the X Window System (and is thus again only available on UNIX/Linux).

Official specification is available here: http://www.xfree86.org/~mvojkovi/XvMC_API.txt

VA-API

Video Acceleration API (VA-API) is a non-proprietary and royalty-free open-source software library ("libva") and API specification, initially developed by Intel but usable in combination with other devices. Linux only: https://en.wikipedia.org/wiki/Video_Acceleration_API
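
Analogously to the DXVA2 test shown later, decoding through VA-API can be smoke-tested as sketched below; the render node path /dev/dri/renderD128 is an assumption and may differ on your system:

# decode via VA-API on the given DRM render node, discarding the output
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -i INPUT -f null - -benchmark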

DXVA2

DirectX Video Acceleration API, developed by Microsoft (supports Windows and Xbox 360).

Link to MSDN documentation: http://msdn.microsoft.com/en-us/library/windows/desktop/cc307941%28v=vs.85%29.aspx

Several decoders are currently supported, in particular H.264, MPEG2, VC1 and WMV3.

DXVA2 hardware acceleration only works on Windows. In order to build FFmpeg with DXVA2 support, you need to install the dxva2api.h header. For MinGW this can be done by downloading the header maintained by VLC:

http://download.videolan.org/pub/contrib/dxva2api.h

and installing it in the include path (for example in /usr/include/).

For MinGW-w64, dxva2api.h is provided by default. One way to install mingw-w64 is through a pacman repository, using one of the following two commands depending on the architecture:

pacman -S mingw-w64-i686-gcc
pacman -S mingw-w64-x86_64-gcc

To enable DXVA2, use the --enable-dxva2 ffmpeg configure switch.

To test decoding, use the following command:

ffmpeg -hwaccel dxva2 -threads 1 -i INPUT -f null - -benchmark

VDA

Video Decode Acceleration API, only supported on OS X. H.264 decoding is available in FFmpeg/libavcodec.

Developer documentation: https://developer.apple.com/library/mac/technotes/tn2267/_index.html
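
Assuming an FFmpeg build with VDA enabled, a minimal decode test follows the same pattern as the other hwaccels:

# decode H.264 via VDA and discard the output
ffmpeg -hwaccel vda -i INPUT -f null - -benchmark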

NVENC

NVENC is an API developed by NVIDIA which enables the use of NVIDIA GPUs to perform H.264 and HEVC encoding. FFmpeg supports NVENC through the h264_nvenc and hevc_nvenc encoders. In order to enable it in FFmpeg you need:

  • A supported GPU
  • Supported drivers
  • ffmpeg configured without --disable-nvenc

Visit the NVIDIA Video Codec SDK site to download the SDK and to read more about the supported GPUs and drivers.

Usage example:

ffmpeg -i input -c:v h264_nvenc -profile high444p -pixel_format yuv444p -preset default output.mp4

You can see available presets, other options, and encoder info with ffmpeg -h encoder=h264_nvenc or ffmpeg -h encoder=hevc_nvenc.

Note: If you get the "No NVENC capable devices found" error, make sure you're encoding to a supported pixel format. See the encoder info as shown above.
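
For reference, the encoder-info commands referred to above are:

ffmpeg -hide_banner -h encoder=h264_nvenc
ffmpeg -hide_banner -h encoder=hevc_nvenc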

CUDA/CUVID/NvDecode

CUVID, which NVIDIA now also calls NVDEC, can be used for decoding on Windows and Linux. In combination with NVENC it offers full hardware transcoding.

CUVID offers decoders for H.264, HEVC, MJPEG, MPEG-1/2/4, VP8/VP9 and VC-1. Codec support varies by hardware; the full set of codecs is only available on Pascal hardware, which adds VP9 and 10-bit support.
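
To check which CUVID decoders a particular build actually provides, one way (assuming a Unix-like shell) is:

# list all decoders and keep only the cuvid ones (h264_cuvid, hevc_cuvid, ...)
ffmpeg -hide_banner -decoders | grep cuvid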

While decoding 10-bit video is supported, it is currently not possible to do full hardware transcoding of it (see the partial hardware example below).

Sample decode using CUVID; in this case the cuvid decoder copies the frames to system memory:

ffmpeg -c:v h264_cuvid -i input output.mkv

Full hardware transcode with CUVID and NVENC:

ffmpeg -hwaccel cuvid -c:v h264_cuvid -i input -c:v h264_nvenc -preset slow output.mkv

Partial hardware transcode, with frames passed through system memory (this is necessary for transcoding 10-bit content):

ffmpeg -c:v h264_cuvid -i input -c:v h264_nvenc -preset slow output.mkv

If ffmpeg was compiled with support for libnpp, it can be used to insert a GPU-based scaler into the chain:

ffmpeg -hwaccel_device 0 -hwaccel cuvid -c:v h264_cuvid -i input -vf scale_npp=-1:720 -c:v h264_nvenc -preset slow output.mkv

The -hwaccel_device option can be used to specify the GPU to be used by the cuvid hwaccel in ffmpeg.

Intel QSV

Intel QSV (Quick Sync Video) is a technology which allows decoding and encoding using the integrated GPU of recent Intel CPUs. Note that the GPU needs to be compatible with both QSV and OpenCL; some older QSV-enabled GPUs are not compatible with OpenCL. See:
http://www.intel.com/content/www/us/en/architecture-and-technology/quick-sync-video/quick-sync-video-general.html
https://software.intel.com/en-us/articles/intel-sdk-for-opencl-applications-2013-release-notes

To enable QSV support, you need the Intel Media SDK integrated in the Intel Media Server Studio: https://software.intel.com/en-us/intel-media-server-studio

The Intel Media Server Studio is available for both Linux and Windows, and contains the libva and libdrm libraries, the libmfx dispatcher library and the Intel drivers. libmfx is the library which selects the codec depending on the system capabilities, falling back to a software implementation if the hardware-accelerated codec is not available.

FFmpeg QSV support relies on libmfx, but the library provided by Intel does not come with pkg-config files and a proper installer. Thus the easiest way to install the library is to use the libmfx version packaged by lu_zero here: https://github.com/lu-zero/mfx_dispatch

Requirements on Windows: install the Intel Media SDK packaged in the Intel Media Server Studio, which comes with a graphical installer, and a MinGW compilation environment (for example provided by MSYS2 with a corresponding Mingw-w64 package). Then you need to build libmfx and install it in a path recognized by pkg-config. For example, if you install to /usr/local, then you need to update the $PKG_CONFIG_PATH environment variable to make it point to /usr/local/lib/pkgconfig.
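
A sketch of that last step (assuming a Bourne-style shell such as the one provided by MSYS2):

# make pkg-config find a libmfx installed under /usr/local
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH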

Requirements on Linux: you either need to rely on the Intel Media Server Studio for Linux, or use a recent enough supported system with the libva and libdrm libraries, the libva Intel drivers, and the libmfx library packaged by lu_zero. Note: if you use the Intel Media Server Studio generic installation script, it may overwrite your system libraries and break the system.

Check the following website for updated information about the Intel Graphics stack on the various Linux platforms: https://01.org/linuxgraphics

To enable QSV support in the FFmpeg build, configure with --enable-libmfx.

Support for decoding and encoding is integrated in FFmpeg through several codecs identified by the _qsv suffix. In particular, it currently supports MPEG2 video, VC1 (decoding only), H.264 and H.265.
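
On the decoding side, a minimal hedged check that complements the encoding examples below is to decode with one of the _qsv decoders and discard the output:

# decode H.264 with the QSV decoder, discarding the result
ffmpeg -c:v h264_qsv -i INPUT -f null - -benchmark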

For example to encode to H.264 using h264_qsv, you can use the command:

ffmpeg -i INPUT -c:v h264_qsv -preset:v faster out.qsv.mp4

If you have a Kaby Lake CPU, you can encode with HEVC using hevc_qsv:

ffmpeg -i INPUT -c:v hevc_qsv -load_plugin hevc_hw -preset:v faster out.qsv.mp4

OpenCL

Official website:

https://www.khronos.org/opencl/

Currently OpenCL is only used in filtering (the deshake and unsharp filters). In order to use OpenCL code you need to build FFmpeg with --enable-opencl. An API to use OpenCL from FFmpeg is provided in libavutil/opencl.h. No decoding/encoding is currently supported.

For --enable-opencl to work, you basically need to install your graphics card's drivers as well as the vendor's OpenCL SDK, and then use its .lib files and headers.
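
As a hedged usage sketch, the OpenCL code path of the unsharp filter can be selected through its opencl option (this assumes a build configured with --enable-opencl and a filter version that still exposes that option):

# run unsharp with its OpenCL implementation enabled
ffmpeg -i INPUT -vf unsharp=opencl=1 OUTPUT.mp4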

AMD VCE

AMD VCE is exposed through VA-API on Linux. For Windows there have been porting attempts, but nothing official yet.
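
Since VCE is reached through the generic VA-API encoders, a hedged encoding sketch on Linux looks like the usual VA-API pipeline; the render node path and the format/hwupload filter chain are assumptions that depend on the setup:

# upload frames to the VA-API device and encode them with h264_vaapi
ffmpeg -vaapi_device /dev/dri/renderD128 -i INPUT -vf 'format=nv12,hwupload' -c:v h264_vaapi OUTPUT.mp4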

External resources

  • http://multimedia.cx/eggs/mac-hwaccel-video/
  • http://thread.gmane.org/gmane.comp.video.ffmpeg.libav.user/11691
  • http://stackoverflow.com/questions/23289157/how-to-use-hardware-acceleration-with-ffmpeg
  • https://gitorious.org/hwdecode-demos/

http://trac.ffmpeg.org/wiki/HWAccelIntro

