The advantage of binding between Surface and Context.
by Hirokazu Honda
Hi,
According to the libva documentation
(http://01org.github.io/libva_staging_doxygen/group__api__core.html#ga4af3...),
surfaces are bound to a context if they are passed as arguments when
creating the context.
Looking at the intel-vaapi-driver code, the surfaces are simply stored in
object_context.render_targets.
The surface processed by the
vaBeginPicture()-vaRenderPicture()-vaEndPicture() sequence is the one
specified in vaBeginPicture() (object_context.current_render_target).
It looks like a surface can be processed with a context by being
specified in vaBeginPicture(), even if it is not bound to that context.
My questions are:
* What is the advantage of binding?
* In what circumstances do we need to associate a context with surfaces?
* In which scenarios is passing surfaces to vaCreateContext() required,
and in which is it not?
Best Regards,
Hirokazu Honda
[ANNOUNCE] libva/libva-utils/intel-vaapi-driver 2.0.0.pre1
by Xiang, Haihao
Hi all,
libva/libva-utils/intel-vaapi-driver 2.0.0.pre1 are planned for release within the coming weeks. Please report any critical issues in the following test packages:
libva:
------
https://github.com/01org/libva/releases/tag/2.0.0.pre1
tarball: https://github.com/01org/libva/releases/download/2.0.0.pre1/libva-2.0.0.p...
* Bump VA-API version to 1.0.0 and libva to 2.0.0
* Add new API for H264 FEI support
* Add definition of VA_FOURCC_I420
* Add functions for converting common enums to strings
* Deprecate H.264 baseline profile and FMO support
* Deprecate the packed misc header flag
* Delete libva-tpi and libva-egl backends
* Refine VASliceParameterBufferHEVC, VAEncMiscParameterBuffer
* Fix errors in VAConfigAttribValEncROI, VAEncMacroblockParameterBufferH264
* Fix race condition in wayland support
* Rename vaMessageCallback to VAMessageCallback
* Make logging callbacks library-safe
Note: libva-2.0.0 is not compatible with older versions of libva, but for most users all you need to do is rebuild your application against libva 2.0.0.pre1
libva-utils:
------------
https://github.com/01org/libva-utils/releases/tag/2.0.0.pre1
tarball: https://github.com/01org/libva-utils/releases/download/2.0.0.pre...
* Bump version to 2.0.0
* Add option '--device ' to vainfo
* Add vp9enc for VP9 encoding
* Add vavpp for video processing
* Add FEI gtest cases
* Fix segmentation fault in putsurface_wayland
* Fix GCC 7.1.1 warnings/errors
* Fix libva version printed out by vainfo
intel-vaapi-driver:
---------------------
https://github.com/01org/intel-vaapi-driver/releases/tag/2.0.0.pre1
tarball: https://github.com/01org/intel-vaapi-driver/releases/download/2.0.0.pre1/...
* Bump version to 2.0.0
* Add support for Coffee Lake (aka. CFL)
- Decoding: H.264/MPEG-2/VC-1/JPEG/VP8/HEVC/HEVC 10-bit/VP9/VP9 10-bit
- Encoding: H.264/MPEG-2/JPEG/VP8/VP9/HEVC/HEVC 10-bit/AVC low power CQP/CBR/VBR mode
- VPP: CSC/scaling/NoiseReduction/Deinterlacing{Bob, MotionAdaptive, MotionCompensated}/ColorBalance/STD
* Add support for H264 FEI
* Add support for HEVC ROI encoding
* Add support for intensity compensation for VC-1 decoding
* Improve the quality of the H264 encoder on BDW/BSW
* Improve the CSC performance between I420/NV12/P010/YUY2/VYUY format
* Improve the performance of va{Get, Put}Image for I420/NV12/P010/YUY2/VYUY format
* Fix image corruption for VP9 decoding
* Fix race condition in wayland support
* Fix ROI support in VDEnc
* Fix corrupted stream when using VDEnc CBR/VBR
* Fix GCC 7.1.1 warnings/errors
* Update the shader for HEVC encoding
Thanks
Haihao
Direct encoding of RGBA surface formats
by Matt Fischer
This is a question about gstreamer-vaapi as well as libva-intel-driver. I
hope this is the right list for those components...if not, please let me
know where I ought to ask it instead.
I have a situation where I'd like to encode video using the gstreamer-vaapi
plugins that's in plain RGBA format (it's coming out of an OpenGL rendering
pipeline). As it stands now, though, the encoder plugins refuse to set up
a pipeline for any format other than YUV 4:2:0/4:2:2. This is enforced
both by an explicit check in gstvaapiencoder.c (the
is_chroma_type_supported() function), as well as by querying the
VAConfigAttribRTFormat attribute from the underlying vaDisplay.
Just for curiosity's sake, I tried adding the RGB32 format both into the
gstreamer check, as well as into the list of allowed chroma formats down in
libva-intel-driver (in i965_drv_video.c in the
i965_get_default_chroma_formats()
function). To my surprise, this seemed to work correctly. The encoded
video played back with correct colors, suggesting that the underlying i965
encoding hardware is capable of accepting RGB formats and doing the
appropriate color conversions.
If this is true, is there any chance of adding official support for RGB
formats in the way that my hack did? It's quite useful to be able to
connect OpenGL rendering into the VAAPI pipeline, and the addition of RGBA
support appears to make that possible. Are there limitations that would
make this difficult, or is the lack of RGBA support thus far just an
oversight?
Thanks,
Matt
Merge v2.0-branch to master
by Xiang, Haihao
Hi all,
We are planning to merge v2.0-branch (libva 2.0) into the master branch within the
coming weeks. Thanks to everybody who contributed to v2.0-branch. The major
changes on v2.0-branch are:
* Bump the major VA-API version to 1.0.0
* Remove the stale libva-egl and libva-tpi interfaces; instead, users should
use the buffer sharing API to share buffers with other libraries
* Deprecate H.264 baseline profile support
* Deprecate the packed misc header support
* Refine VASliceParameterBufferHEVC
* Refine VAEncMacroblockParameterBufferH264
* Refine VAConfigAttribEncSliceStructure
* Add the fourcc of I420 format
* Make logging callbacks library-safe
* Update lots of comments
v2.0-branch works with FFmpeg, gstreamer-vaapi and libyami master without
regression.
Please let me know if you have any concern or question, or you may provide your
comment at https://github.com/01org/libva/issues/72
Thanks
Haihao
VP8 hardware acceleration using i965 intel driver via Chromium WebRTC
by tsoumplekas_giorgos
Dear all,
We are working on a Beebox board running Debian 9. We are trying to encode
a VP8 stream in Chromium during a WebRTC video call,
but the quality is not as good as the stream created by the software
encoder.
Specifically, for an SD video stream at a bitrate of 300 kbps and a frame
rate of 30 fps, you can see waves in a frame. The problem gets worse as the
motion between frames increases.
When we increase the bitrate to 2 Mbps, the quality improves dramatically,
but the problem reappears when there is a lot of motion in the video
frame. At a bitrate of 4 Mbps the image is perfect, but there is a
little lag when the motion is high.
We have tried different combinations of settings but cannot find a
perfect one.
Could you please give us a set of ideal settings, or some advice on how to
achieve real-time encoding with quality comparable to the software encoder?
Our settings are below.
What is wrong or not optimal?
// Sequence parameters
VAEncSequenceParameterBufferVP8 seq_param;
memset(&seq_param, 0, sizeof(seq_param));
seq_param.frame_width = 640;
seq_param.frame_height = 480;
seq_param.frame_width_scale = 1;
seq_param.frame_height_scale = 1;
seq_param.intra_period = 256;
seq_param.bits_per_second = 2000000; /* 2 Mbps */
seq_param.error_resilient = 0;
seq_param.kf_auto = 0;
seq_param.kf_min_dist = 0;
seq_param.kf_max_dist = seq_param.intra_period;
// reference_frames holds the 4 latest decoded frames
seq_param.reference_frames[i] = (*iter)->id();

// Picture parameters
VAEncPictureParameterBufferVP8 pic_param;

// Intra (key) frame
pic_param.pic_flags.bits.refresh_entropy_probs = 0;
pic_param.pic_flags.bits.frame_type = 0;
pic_param.ref_flags.bits.force_kf = 1;
pic_param.pic_flags.bits.refresh_last = 1;
pic_param.pic_flags.bits.refresh_golden_frame = 1;
pic_param.pic_flags.bits.refresh_alternate_frame = 1;
pic_param.pic_flags.bits.copy_buffer_to_alternate = 0;
pic_param.pic_flags.bits.copy_buffer_to_golden = 0;
pic_param.ref_last_frame = pic_param.ref_gf_frame = pic_param.ref_arf_frame
    = VA_INVALID_SURFACE;

// P frame
pic_param.pic_flags.bits.frame_type = 1;
pic_param.ref_flags.bits.force_kf = 0;
pic_param.pic_flags.bits.refresh_last = 1;
pic_param.pic_flags.bits.refresh_golden_frame = 0;
pic_param.pic_flags.bits.refresh_alternate_frame = 0;
pic_param.pic_flags.bits.copy_buffer_to_alternate = 2;
pic_param.pic_flags.bits.copy_buffer_to_golden = 1;
pic_param.ref_last_frame = pic_param.ref_gf_frame = pic_param.ref_arf_frame
    = /* latest decoded frame */;

// Common settings
pic_param.pic_flags.bits.show_frame = 1;
pic_param.clamp_qindex_low = 0;
pic_param.clamp_qindex_high = 127;
pic_param.pic_flags.bits.version = 1;
pic_param.pic_flags.bits.loop_filter_type = 1;
pic_param.sharpness_level = 0;
for (int i = 0; i < 4; i++) { pic_param.loop_filter_level[i] = 16; }

// Quantization parameters
// For I frames:
quant.quantization_index[i] = 4;
// For P frames:
quant.quantization_index[i] = 26;

// Misc parameters
rate_control_param.bits_per_second = 2000000; /* 2 Mbps */
rate_control_param.target_percentage = 90;
rate_control_param.window_size = 1500;
rate_control_param.initial_qp = 26;
rate_control_param.rc_flags.bits.disable_frame_skip = true;
framerate_param.framerate = 30;
hrd_param.buffer_size = cpb_size_ = bitrate_ * kCPBWindowSizeMs / 1000;
hrd_param.initial_buffer_fullness = cpb_size_ / 2;
Georgios (George) Tsoumplekas
Software Engineer @ Unify S.A
M.Sc. Degree in Computer Systems Technology, Kapodistrian University of
Athens
5-year Diploma in Computer Engineering, University of Thessaly
email: getsoubl(a)gmail.com