Mirror of https://repo.dec05eba.com/gpu-screen-recorder (synced 2026-04-17 23:46:20 +09:00)

Compare commits (6 commits)
| Author | SHA1 | Date |
|---|---|---|
| | 2f4a906b78 | |
| | 3e6bc0224a | |
| | 286158a838 | |
| | 8b953e95d8 | |
| | 4ed04830c1 | |
| | 755340454d | |
README.md (13 lines changed)
@@ -48,6 +48,7 @@ Here are some known unofficial packages:
 * Solus: [gpu-screen-recorder](https://github.com/getsolus/packages/tree/main/packages/g/gpu-screen-recorder)
 * Nobara: [Nobara wiki](https://wiki.nobaraproject.org/en/general-usage/additional-software/GPU-Screen-Recorder)
 * AppImage [AppImage GitHub releases](https://github.com/pkgforge-dev/gpu-screen-recorder-AppImage/releases)
+* Void Linux: [gpu-screen-recorder](https://voidlinux.org/packages/?arch=x86_64&q=gpu-screen-recorder) (Make sure to read the README in the package)
 
 # Dependencies
 
 GPU Screen Recorder uses meson build system so you need to install `meson` to build GPU Screen Recorder.
@@ -91,12 +92,12 @@ There are also additional dependencies needed at runtime depending on your GPU v
 * xnvctrl (libxnvctrl0, when using the `-oc` option)
 
 # How to use
-Run `gpu-screen-recorder --help` to see all options and also examples.\
+Run `gpu-screen-recorder --help` to see all options and run `man gpu-screen-recorder` to see more detailed explanations for the options and also examples.\
 There is also a gui for the gpu screen recorder called [GPU Screen Recorder GTK](https://git.dec05eba.com/gpu-screen-recorder-gtk/).\
 There is also a new alternative UI for GPU Screen Recorder in the style of ShadowPlay called [GPU Screen Recorder UI](https://git.dec05eba.com/gpu-screen-recorder-ui/).
 ## Recording
 Here is an example of how to record your monitor and the default audio output: `gpu-screen-recorder -w screen -f 60 -a default_output -o ~/Videos/test_video.mp4`.
-Yyou can stop and save the recording with `Ctrl+C` or by running `pkill -SIGINT -f gpu-screen-recorder`.
+Yyou can stop and save the recording with `Ctrl+C` or by running `pkill -SIGINT -f "^gpu-screen-recorder"`.
 You can see a list of capture options to record if you run `gpu-screen-recorder --list-capture-options`. This will list possible capture options and monitor names, for example:\
 ```
 window
@@ -122,12 +123,12 @@ GPU Screen Recorder uses Ffmpeg so GPU Screen Recorder supports all protocols th
 If you want to reduce latency one thing you can do is to use the `-keyint` option, for example `-keyint 0.5`. Lower value means lower latency at the cost of increased bitrate/decreased quality.
 ## Recording while using replay/streaming
 You can record a regular video while using replay/streaming by launching GPU Screen Recorder with the `-ro` option to specify a directory where to save the recording (for example: `gpu-screen-recorder -w screen -c mp4 -r 60 -o "$HOME/Videos/replays" -ro "$HOME/Videos/recordings"`).\
-To start/stop (and save) recording use the SIGRTMIN signal, for example `pkill -SIGRTMIN -f gpu-screen-recorder`. The path to the video will be displayed in stdout when saving the video.\
+To start/stop (and save) recording use the SIGRTMIN signal, for example `pkill -SIGRTMIN -f "^gpu-screen-recorder"`. The path to the video will be displayed in stdout when saving the video.\
 This way of recording while using replay/streaming is more efficient than running GPU Screen Recorder multiple times since this way it only records the screen and encodes the video once.
 ## Controlling GPU Screen Recorder remotely
-To save a video in replay mode, you need to send signal SIGUSR1 to gpu screen recorder. You can do this by running `pkill -SIGUSR1 -f gpu-screen-recorder`.\
-To stop recording send SIGINT to gpu screen recorder. You can do this by running `pkill -SIGINT -f gpu-screen-recorder` or pressing `Ctrl-C` in the terminal that runs gpu screen recorder. When recording a regular non-replay video this will also save the video.\
-To pause/unpause recording send SIGUSR2 to gpu screen recorder. You can do this by running `pkill -SIGUSR2 -f gpu-screen-recorder`. This is only applicable and useful when recording (not streaming nor replay).\
+To save a video in replay mode, you need to send signal SIGUSR1 to gpu screen recorder. You can do this by running `pkill -SIGUSR1 -f "^gpu-screen-recorder"`.\
+To stop recording send SIGINT to gpu screen recorder. You can do this by running `pkill -SIGINT -f "^gpu-screen-recorder"` or pressing `Ctrl-C` in the terminal that runs gpu screen recorder. When recording a regular non-replay video this will also save the video.\
+To pause/unpause recording send SIGUSR2 to gpu screen recorder. You can do this by running `pkill -SIGUSR2 -f "^gpu-screen-recorder"`. This is only applicable and useful when recording (not streaming nor replay).\
 There are more signals to control GPU Screen Recorder. Run `gpu-screen-recorder --help` to list them all (under `NOTES` section).
 ## Simple way to run replay without gui
 Run the script `scripts/start-replay.sh` to start replay and then `scripts/save-replay.sh` to save a replay and `scripts/stop-replay.sh` to stop the replay. The videos are saved to `$HOME/Videos`.
TODO (8 lines changed)
@@ -405,3 +405,11 @@ Use GL_RGBA16F or GL_RGBA32F for hdr, that allows color values to be outside the
 https://registry.khronos.org/EGL/extensions/EXT/EGL_EXT_surface_SMPTE2086_metadata.txt
 
 Doesn't work: sibs run --args -w "/dev/video0;camera_width=800;camera_height=600;pixfmt=yuyv" -fm content -o video.mp4
+
+Delay adding audio data until 1 frames time has passed.
+
+Allow av1 in the flatpak. Do that by patching ffmpeg to support multiple nvenc codepaths by using header versions, the same way gsr does in the nvenc query.
+
+Increase qp on av1 non-nvidia, decrease qp on h264 nvidia. For vulkan.
+
+Check if the vulkan codec query works on nvidia x11.
@@ -274,10 +274,17 @@ Audio bitrate in kbps (default: 128 for opus/flac, 160 for aac). 0 = automatic.
 .TP
 .BI \-k " codec"
 Video codec:
-.BR auto ", " h264 ", " hevc ", " av1 ", " vp8 ", " vp9 ", " hevc_hdr ", " av1_hdr ", " hevc_10bit ", " av1_10bit
-(default: auto → h264). HDR options not available on X11 or portal capture.
+.BR auto ", " h264 ", " hevc ", " av1 ", " vp8 ", " vp9 ", " hevc_hdr ", " av1_hdr ", " hevc_10bit ", " av1_10bit ", " h264_vulkan ", " hevc_vulkan ", " hevc_10bit_vulkan ", " av1_vulkan ", " av1_hdr_vulkan ", " av1_10bit_vulkan
 
-10-bit capture reduces banding but may not be supported properly by all video players.
+HDR options not available on X11 or portal capture. 10-bit capture reduces banding but may not be supported properly by all video players.
+.br
+Vulkan codec options are experimental. They may not work properly on your system because of GPU driver issues.
+.br
+Using vulkan codecs may result in better gaming performance, especially on NVIDIA as it doesn't suffer from an issue known as "cuda p2 state"
+.br
+where the GPU gets downclocked when using nvenc (regular video codecs on NVIDIA).
+
+(default: auto → h264).
 .TP
 .BI \-q " quality"
 Quality preset (medium, high, very_high, ultra) for QP/VBR mode, or bitrate (kbps) for CBR mode (default: very_high).
@@ -450,7 +457,7 @@ Save last 30 minutes (replay mode).
 Use
 .B pkill
 to send signals (e.g.,
-.BR "pkill -SIGUSR1 -f gpu-screen-recorder" ).
+.BR "pkill -SIGUSR1 -f ""^gpu-screen-recorder""" ).
 .SH EXAMPLES
 Record monitor at 60 FPS with desktop audio:
 .PP
@@ -581,7 +588,7 @@ Close other screen recorders (including idle OBS)
 .IP \(bu 3
 NVIDIA: CUDA breaks after suspend (install gsr-nvidia.conf fix)
 .IP \(bu 3
-AMD: Possible black bars colors with HEVC/AV1 (use H264 or FFmpeg >=8)
+AMD: Possible black bars in output video with HEVC/AV1 (use H264 or FFmpeg >=8)
 .SH SEE ALSO
 .UR https://git.dec05eba.com/gpu-screen-recorder
 Project homepage
@@ -3,6 +3,6 @@
 
 #include "codec_query.h"
 
-bool gsr_get_supported_video_codecs_vulkan(gsr_supported_video_codecs *video_codecs, const char *card_path, bool cleanup);
+bool gsr_get_supported_video_codecs_vulkan(gsr_supported_video_codecs *video_codecs, const char *card_path, int *device_index_ret, bool cleanup);
 
 #endif /* GSR_CODEC_QUERY_VULKAN_H */
@@ -51,6 +51,11 @@ typedef enum {
     GSR_VIDEO_CODEC_VP9,
     GSR_VIDEO_CODEC_H264_VULKAN,
     GSR_VIDEO_CODEC_HEVC_VULKAN,
+    GSR_VIDEO_CODEC_HEVC_HDR_VULKAN,
+    GSR_VIDEO_CODEC_HEVC_10BIT_VULKAN,
+    GSR_VIDEO_CODEC_AV1_VULKAN,
+    GSR_VIDEO_CODEC_AV1_HDR_VULKAN,
+    GSR_VIDEO_CODEC_AV1_10BIT_VULKAN,
 } gsr_video_codec;
 
 typedef enum {
@@ -169,6 +169,9 @@ typedef void (*FUNC_glTexStorageMem2DEXT)(unsigned int target, int levels, unsig
 typedef void (*FUNC_glBufferStorageMemEXT)(unsigned int target, ssize_t size, unsigned int memory, uint64_t offset);
 typedef void (*FUNC_glNamedBufferStorageMemEXT)(unsigned int buffer, ssize_t size, unsigned int memory, uint64_t offset);
 typedef void (*FUNC_glMemoryObjectParameterivEXT)(unsigned int memoryObject, unsigned int pname, const int *params);
+typedef void (*FUNC_glGenSemaphoresEXT)(int n, unsigned int *semaphores);
+typedef void (*FUNC_glImportSemaphoreFdEXT)(unsigned int semaphore, unsigned int handleType, int fd);
+typedef void (*FUNC_glSignalSemaphoreEXT)(unsigned int semaphore, unsigned int numBufferBarriers, const unsigned int *buffers, unsigned int numTextureBarriers, const unsigned int *textures, const unsigned int *dstLayouts);
 
 typedef enum {
     GSR_GL_CONTEXT_TYPE_EGL,
@@ -195,6 +198,7 @@ struct gsr_egl {
     gsr_gpu_info gpu_info;
 
     char card_path[128];
+    int vulkan_device_index;
 
     int32_t (*eglGetError)(void);
     EGLDisplay (*eglGetDisplay)(EGLNativeDisplayType display_id);
@@ -226,6 +230,9 @@ struct gsr_egl {
     FUNC_glBufferStorageMemEXT glBufferStorageMemEXT;
     FUNC_glNamedBufferStorageMemEXT glNamedBufferStorageMemEXT;
     FUNC_glMemoryObjectParameterivEXT glMemoryObjectParameterivEXT;
+    FUNC_glGenSemaphoresEXT glGenSemaphoresEXT;
+    FUNC_glImportSemaphoreFdEXT glImportSemaphoreFdEXT;
+    FUNC_glSignalSemaphoreEXT glSignalSemaphoreEXT;
 
     __GLXextFuncPtr (*glXGetProcAddress)(const unsigned char *procName);
     GLXFBConfig* (*glXChooseFBConfig)(Display *dpy, int screen, const int *attribList, int *nitems);
@@ -378,6 +378,7 @@ int gsr_kms_client_init(gsr_kms_client *self, const char *card_path) {
                 goto err;
             }
         }
+        poll_fd.revents = 0;
    }
    fprintf(stderr, "gsr info: gsr_kms_client_init: server connected\n");
 
@@ -1,4 +1,4 @@
-project('gpu-screen-recorder', ['c', 'cpp'], version : '5.12.5', default_options : ['warning_level=2'])
+project('gpu-screen-recorder', ['c', 'cpp'], version : '5.13.0', default_options : ['warning_level=2'])
 
 add_project_arguments('-Wshadow', language : ['c', 'cpp'])
 if get_option('buildtype') == 'debug'
@@ -74,6 +74,7 @@ dep = [
     dependency('libdrm'),
     dependency('wayland-egl'),
     dependency('wayland-client'),
+    dependency('vulkan'),
 ]
 
 if build_machine.system() == 'linux'
@@ -1,7 +1,7 @@
 [package]
 name = "gpu-screen-recorder"
 type = "executable"
-version = "5.12.5"
+version = "5.13.0"
 platforms = ["posix"]
 
 [config]
@@ -17,17 +17,24 @@
 #endif
 
 static const ArgEnum video_codec_enums[] = {
     { .name = "auto", .value = GSR_VIDEO_CODEC_AUTO },
     { .name = "h264", .value = GSR_VIDEO_CODEC_H264 },
     { .name = "h265", .value = GSR_VIDEO_CODEC_HEVC },
     { .name = "hevc", .value = GSR_VIDEO_CODEC_HEVC },
     { .name = "hevc_hdr", .value = GSR_VIDEO_CODEC_HEVC_HDR },
     { .name = "hevc_10bit", .value = GSR_VIDEO_CODEC_HEVC_10BIT },
     { .name = "av1", .value = GSR_VIDEO_CODEC_AV1 },
     { .name = "av1_hdr", .value = GSR_VIDEO_CODEC_AV1_HDR },
     { .name = "av1_10bit", .value = GSR_VIDEO_CODEC_AV1_10BIT },
     { .name = "vp8", .value = GSR_VIDEO_CODEC_VP8 },
     { .name = "vp9", .value = GSR_VIDEO_CODEC_VP9 },
+    { .name = "h264_vulkan", .value = GSR_VIDEO_CODEC_H264_VULKAN },
+    { .name = "hevc_vulkan", .value = GSR_VIDEO_CODEC_HEVC_VULKAN },
+    { .name = "hevc_hdr_vulkan", .value = GSR_VIDEO_CODEC_HEVC_HDR_VULKAN },
+    { .name = "hevc_10bit_vulkan", .value = GSR_VIDEO_CODEC_HEVC_10BIT_VULKAN },
+    { .name = "av1_vulkan", .value = GSR_VIDEO_CODEC_AV1_VULKAN },
+    { .name = "av1_hdr_vulkan", .value = GSR_VIDEO_CODEC_AV1_HDR_VULKAN },
+    { .name = "av1_10bit_vulkan", .value = GSR_VIDEO_CODEC_AV1_10BIT_VULKAN },
 };
 
 static const ArgEnum audio_codec_enums[] = {
@@ -222,6 +222,13 @@ static size_t gsr_capture_v4l2_get_supported_resolutions(int fd, gsr_capture_v4l
 
     while(xioctl(fd, VIDIOC_ENUM_FRAMESIZES, &fmt) == 0) {
         if(fmt.type == V4L2_FRMSIZE_TYPE_DISCRETE && resolution_index < max_resolutions) {
+            // Skip unsupported resolutions for now (those that eglCreateImage cant import because of pitch hardware limitation).
+            // TODO: Find a fix for this.
+            if(pixfmt == GSR_CAPTURE_V4L2_PIXFMT_YUYV && (fmt.discrete.width % 128 != 0)) {
+                ++fmt.index;
+                continue;
+            }
+
             resolutions[resolution_index] = (gsr_capture_v4l2_resolution){
                 .width = fmt.discrete.width,
                 .height = fmt.discrete.height,
@@ -305,6 +312,7 @@ uint32_t gsr_capture_v4l2_framerate_to_number(gsr_capture_v4l2_framerate framera
     return (uint32_t)((double)framerate.denominator / (double)framerate.numerator);
 }
 
+// TODO: Select the resolution closest to |camera_resolution|, if it's not 0, 0
 static bool gsr_capture_v4l2_get_best_matching_setup(
     const gsr_capture_v4l2_supported_setup *supported_setups,
     size_t num_supported_setups,
@@ -1,11 +1,13 @@
 #include "../../include/codec_query/vulkan.h"
+#include "../../include/utils.h"
 
 #include <stdio.h>
 #include <string.h>
 #include <stdlib.h>
 #include <xf86drm.h>
+#include <dlfcn.h>
 #define VK_NO_PROTOTYPES
-//#include <vulkan/vulkan.h>
+#include <vulkan/vulkan.h>
 
 #define MAX_PHYSICAL_DEVICES 32
 
@@ -21,15 +23,149 @@ static const char *required_device_extensions[] = {
 };
 static int num_required_device_extensions = 8;
 
-bool gsr_get_supported_video_codecs_vulkan(gsr_supported_video_codecs *video_codecs, const char *card_path, bool cleanup) {
+static void set_h264_max_resolution(PFN_vkGetPhysicalDeviceVideoCapabilitiesKHR vkGetPhysicalDeviceVideoCapabilitiesKHR, VkPhysicalDevice physical_device, gsr_supported_video_codecs *video_codecs) {
+    const VkVideoEncodeH264ProfileInfoKHR h264_profile = {
+        .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_H264_PROFILE_INFO_KHR,
+        .pNext = NULL,
+        .stdProfileIdc = STD_VIDEO_H264_PROFILE_IDC_HIGH
+    };
+
+    const VkVideoProfileInfoKHR video_profile = {
+        .sType = VK_STRUCTURE_TYPE_VIDEO_PROFILE_INFO_KHR,
+        .pNext = &h264_profile, // Chain the codec-specific profile
+        .videoCodecOperation = VK_VIDEO_CODEC_OPERATION_ENCODE_H264_BIT_KHR,
+        .chromaSubsampling = VK_VIDEO_CHROMA_SUBSAMPLING_420_BIT_KHR,
+        .lumaBitDepth = VK_VIDEO_COMPONENT_BIT_DEPTH_8_BIT_KHR,
+        .chromaBitDepth = VK_VIDEO_COMPONENT_BIT_DEPTH_8_BIT_KHR
+    };
+
+    VkVideoEncodeH264CapabilitiesKHR encode_caps = {
+        .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_H264_CAPABILITIES_KHR,
+        .pNext = NULL
+    };
+
+    VkVideoCapabilitiesKHR video_caps = {
+        .sType = VK_STRUCTURE_TYPE_VIDEO_CAPABILITIES_KHR,
+        .pNext = &encode_caps
+    };
+
+    if (vkGetPhysicalDeviceVideoCapabilitiesKHR(physical_device, &video_profile, &video_caps) == VK_SUCCESS) {
+        video_codecs->h264.max_resolution.x = video_caps.maxCodedExtent.width;
+        video_codecs->h264.max_resolution.y = video_caps.maxCodedExtent.height;
+    }
+}
+
+static void set_hevc_max_resolution(PFN_vkGetPhysicalDeviceVideoCapabilitiesKHR vkGetPhysicalDeviceVideoCapabilitiesKHR, VkPhysicalDevice physical_device, gsr_supported_video_codecs *video_codecs) {
+    const VkVideoEncodeH265ProfileInfoKHR hevc_profile = {
+        .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_H265_PROFILE_INFO_KHR,
+        .pNext = NULL,
+        .stdProfileIdc = STD_VIDEO_H265_PROFILE_IDC_MAIN
+    };
+
+    const VkVideoProfileInfoKHR video_profile = {
+        .sType = VK_STRUCTURE_TYPE_VIDEO_PROFILE_INFO_KHR,
+        .pNext = &hevc_profile, // Chain the codec-specific profile
+        .videoCodecOperation = VK_VIDEO_CODEC_OPERATION_ENCODE_H265_BIT_KHR,
+        .chromaSubsampling = VK_VIDEO_CHROMA_SUBSAMPLING_420_BIT_KHR,
+        .lumaBitDepth = VK_VIDEO_COMPONENT_BIT_DEPTH_8_BIT_KHR,
+        .chromaBitDepth = VK_VIDEO_COMPONENT_BIT_DEPTH_8_BIT_KHR
+    };
+
+    VkVideoEncodeH265CapabilitiesKHR encode_caps = {
+        .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_H265_CAPABILITIES_KHR,
+        .pNext = NULL
+    };
+
+    VkVideoCapabilitiesKHR video_caps = {
+        .sType = VK_STRUCTURE_TYPE_VIDEO_CAPABILITIES_KHR,
+        .pNext = &encode_caps
+    };
+
+    if (vkGetPhysicalDeviceVideoCapabilitiesKHR(physical_device, &video_profile, &video_caps) == VK_SUCCESS) {
+        video_codecs->hevc.max_resolution.x = video_caps.maxCodedExtent.width;
+        video_codecs->hevc.max_resolution.y = video_caps.maxCodedExtent.height;
+
+        video_codecs->hevc_hdr.max_resolution.x = video_caps.maxCodedExtent.width;
+        video_codecs->hevc_hdr.max_resolution.y = video_caps.maxCodedExtent.height;
+
+        video_codecs->hevc_10bit.max_resolution.x = video_caps.maxCodedExtent.width;
+        video_codecs->hevc_10bit.max_resolution.y = video_caps.maxCodedExtent.height;
+    }
+}
+
+static void set_av1_max_resolution(PFN_vkGetPhysicalDeviceVideoCapabilitiesKHR vkGetPhysicalDeviceVideoCapabilitiesKHR, VkPhysicalDevice physical_device, gsr_supported_video_codecs *video_codecs) {
+    const VkVideoEncodeAV1ProfileInfoKHR av1_profile = {
+        .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_AV1_PROFILE_INFO_KHR,
+        .pNext = NULL,
+        .stdProfile = STD_VIDEO_AV1_PROFILE_MAIN
+    };
+
+    const VkVideoProfileInfoKHR video_profile = {
+        .sType = VK_STRUCTURE_TYPE_VIDEO_PROFILE_INFO_KHR,
+        .pNext = &av1_profile, // Chain the codec-specific profile
+        .videoCodecOperation = VK_VIDEO_CODEC_OPERATION_ENCODE_AV1_BIT_KHR,
+        .chromaSubsampling = VK_VIDEO_CHROMA_SUBSAMPLING_420_BIT_KHR,
+        .lumaBitDepth = VK_VIDEO_COMPONENT_BIT_DEPTH_8_BIT_KHR,
+        .chromaBitDepth = VK_VIDEO_COMPONENT_BIT_DEPTH_8_BIT_KHR
+    };
+
+    VkVideoEncodeH265CapabilitiesKHR encode_caps = {
+        .sType = VK_STRUCTURE_TYPE_VIDEO_ENCODE_AV1_CAPABILITIES_KHR,
+        .pNext = NULL
+    };
+
+    VkVideoCapabilitiesKHR video_caps = {
+        .sType = VK_STRUCTURE_TYPE_VIDEO_CAPABILITIES_KHR,
+        .pNext = &encode_caps
+    };
+
+    if (vkGetPhysicalDeviceVideoCapabilitiesKHR(physical_device, &video_profile, &video_caps) == VK_SUCCESS) {
+        video_codecs->av1.max_resolution.x = video_caps.maxCodedExtent.width;
+        video_codecs->av1.max_resolution.y = video_caps.maxCodedExtent.height;
+
+        video_codecs->av1_hdr.max_resolution.x = video_caps.maxCodedExtent.width;
+        video_codecs->av1_hdr.max_resolution.y = video_caps.maxCodedExtent.height;
+
+        video_codecs->av1_10bit.max_resolution.x = video_caps.maxCodedExtent.width;
+        video_codecs->av1_10bit.max_resolution.y = video_caps.maxCodedExtent.height;
+    }
+}
+
+bool gsr_get_supported_video_codecs_vulkan(gsr_supported_video_codecs *video_codecs, const char *card_path, int *device_index_ret, bool cleanup) {
     memset(video_codecs, 0, sizeof(*video_codecs));
-#if 0
+    *device_index_ret = 0;
 
     bool success = false;
+    void* libvulkan = NULL;
     VkInstance instance = NULL;
     VkPhysicalDevice physical_devices[MAX_PHYSICAL_DEVICES];
     VkDevice device = NULL;
     VkExtensionProperties *device_extensions = NULL;
 
+    char render_path[128];
+    if(!gsr_card_path_get_render_path(card_path, render_path)) {
+        fprintf(stderr, "gsr error: gsr_get_supported_video_codecs_vulkan: failed to get /dev/dri/renderDXXX file from %s\n", card_path);
+        return false;
+    }
+
+    libvulkan = dlopen("libvulkan.so.1", RTLD_NOW);
+    if (!libvulkan) {
+        fprintf(stderr, "gsr error: gsr_get_supported_video_codecs_vulkan: failed to load libvulkan.so.1, error: %s\n", dlerror());
+        return false;
+    }
+
+    PFN_vkGetInstanceProcAddr vkGetInstanceProcAddr = (PFN_vkGetInstanceProcAddr)dlsym(libvulkan, "vkGetInstanceProcAddr");
+    if (!vkGetInstanceProcAddr) {
+        fprintf(stderr, "gsr error: gsr_get_supported_video_codecs_vulkan: could not find vkGetInstanceProcAddr in libvulkan.so.1\n");
+        goto done;
+    }
+
+    PFN_vkCreateInstance vkCreateInstance = (PFN_vkCreateInstance)vkGetInstanceProcAddr(NULL, "vkCreateInstance");
+    if(!vkCreateInstance) {
+        fprintf(stderr, "gsr error: gsr_get_supported_video_codecs_vulkan: could not find vkCreateInstance in libvulkan.so.1\n");
+        goto done;
+    }
+
     const VkApplicationInfo app_info = {
         .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
         .pApplicationName = "GPU Screen Recorder",
@@ -49,6 +185,26 @@ bool gsr_get_supported_video_codecs_vulkan(gsr_supported_video_cod
         goto done;
     }
 
+    PFN_vkEnumeratePhysicalDevices vkEnumeratePhysicalDevices = NULL;
+    PFN_vkDestroyInstance vkDestroyInstance = NULL;
+    PFN_vkGetPhysicalDeviceProperties2 vkGetPhysicalDeviceProperties2 = NULL;
+    PFN_vkCreateDevice vkCreateDevice = NULL;
+    PFN_vkEnumerateDeviceExtensionProperties vkEnumerateDeviceExtensionProperties = NULL;
+    PFN_vkDestroyDevice vkDestroyDevice = NULL;
+    PFN_vkGetPhysicalDeviceVideoCapabilitiesKHR vkGetPhysicalDeviceVideoCapabilitiesKHR = NULL;
+
+#define LOAD_INST(name) name = (PFN_##name)vkGetInstanceProcAddr(instance, #name); if(!name) { fprintf(stderr, "gsr error: gsr_get_supported_video_codecs_vulkan: could not find " #name " in libvulkan.so.1\n"); goto done; }
+
+    LOAD_INST(vkEnumeratePhysicalDevices)
+    LOAD_INST(vkDestroyInstance)
+    LOAD_INST(vkGetPhysicalDeviceProperties2)
+    LOAD_INST(vkCreateDevice)
+    LOAD_INST(vkEnumerateDeviceExtensionProperties)
+    LOAD_INST(vkDestroyDevice)
+    LOAD_INST(vkGetPhysicalDeviceVideoCapabilitiesKHR)
+
+#undef LOAD_INST
+
     uint32_t num_devices = 0;
     if(vkEnumeratePhysicalDevices(instance, &num_devices, NULL) != VK_SUCCESS) {
         fprintf(stderr, "gsr error: gsr_get_supported_video_codecs_vulkan: vkEnumeratePhysicalDevices (query num devices) failed\n");
@@ -81,12 +237,13 @@ bool gsr_get_supported_video_codecs_vulkan(gsr_supported_video_cod
         };
         vkGetPhysicalDeviceProperties2(physical_devices[i], &device_properties);
 
-        if(!device_drm_properties.hasPrimary)
+        if(!device_drm_properties.hasRender)
             continue;
 
-        snprintf(device_card_path, sizeof(device_card_path), DRM_DEV_NAME, DRM_DIR_NAME, (int)device_drm_properties.primaryMinor);
-        if(strcmp(device_card_path, card_path) == 0) {
+        snprintf(device_card_path, sizeof(device_card_path), DRM_RENDER_DEV_NAME, DRM_DIR_NAME, (int)device_drm_properties.renderMinor);
+        if(strcmp(device_card_path, render_path) == 0) {
             physical_device = physical_devices[i];
+            *device_index_ret = i;
             break;
         }
     }
@@ -126,31 +283,36 @@ bool gsr_get_supported_video_codecs_vulkan(gsr_supported_video_codecs *video_cod
|
|||||||
|
|
||||||
for(uint32_t i = 0; i < num_device_extensions; ++i) {
|
for(uint32_t i = 0; i < num_device_extensions; ++i) {
|
||||||
if(strcmp(device_extensions[i].extensionName, "VK_KHR_video_encode_h264") == 0) {
|
if(strcmp(device_extensions[i].extensionName, "VK_KHR_video_encode_h264") == 0) {
|
||||||
video_codecs->h264 = true;
|
video_codecs->h264.supported = true;
|
||||||
} else if(strcmp(device_extensions[i].extensionName, "VK_KHR_video_encode_h265") == 0) {
|
} else if(strcmp(device_extensions[i].extensionName, "VK_KHR_video_encode_h265") == 0) {
|
||||||
// TODO: Verify if 10bit and hdr are actually supported
|
// TODO: Verify if 10bit and hdr are actually supported
|
||||||
video_codecs->hevc = true;
|
video_codecs->hevc.supported = true;
|
||||||
video_codecs->hevc_10bit = true;
|
video_codecs->hevc_10bit.supported = true;
|
||||||
video_codecs->hevc_hdr = true;
|
video_codecs->hevc_hdr.supported = true;
|
||||||
|
} else if(strcmp(device_extensions[i].extensionName, "VK_KHR_video_encode_av1") == 0) {
|
||||||
|
// TODO: Verify if 10bit and hdr are actually supported
|
||||||
|
video_codecs->av1.supported = true;
|
||||||
|
video_codecs->av1_10bit.supported = true;
|
||||||
|
video_codecs->av1_hdr.supported = true;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
set_h264_max_resolution(vkGetPhysicalDeviceVideoCapabilitiesKHR, physical_device, video_codecs);
|
||||||
|
set_hevc_max_resolution(vkGetPhysicalDeviceVideoCapabilitiesKHR, physical_device, video_codecs);
|
||||||
|
set_av1_max_resolution(vkGetPhysicalDeviceVideoCapabilitiesKHR, physical_device, video_codecs);
|
||||||
|
|
||||||
success = true;
|
success = true;
|
||||||
|
|
||||||
done:
|
done:
|
||||||
|
if(device_extensions)
|
||||||
|
free(device_extensions);
|
||||||
if(cleanup) {
|
if(cleanup) {
|
||||||
if(device)
|
if(device)
|
||||||
vkDestroyDevice(device, NULL);
|
vkDestroyDevice(device, NULL);
|
||||||
if(instance)
|
if(instance)
|
||||||
vkDestroyInstance(instance, NULL);
|
vkDestroyInstance(instance, NULL);
|
||||||
}
|
}
|
||||||
if(device_extensions)
|
if(libvulkan)
|
||||||
free(device_extensions);
|
dlclose(libvulkan);
|
||||||
return success;
|
return success;
|
||||||
#else
|
|
||||||
// TODO: Low power query
|
|
||||||
video_codecs->h264 = (gsr_supported_video_codec){ true, false };
|
|
||||||
video_codecs->hevc = (gsr_supported_video_codec){ true, false };
|
|
||||||
return true;
|
|
||||||
#endif
|
|
||||||
}
|
}
|
||||||
|
|||||||
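The `LOAD_INST` macro in the hunk above relies on two preprocessor operators: `#name` stringifies the identifier (so it can be passed to `vkGetInstanceProcAddr` and printed in the error message) and `PFN_##name` token-pastes the matching function-pointer typedef name. A minimal standalone sketch of the same pattern, with an invented `fake_get_proc` resolver and `square` symbol standing in for libvulkan:

```c
#include <stdio.h>
#include <string.h>

/* Invented stand-ins: the real code resolves its symbols through
   vkGetInstanceProcAddr from libvulkan.so.1. */
static double square_impl(double x) { return x * x; }

typedef double (*PFN_square)(double);
static PFN_square square = NULL;

static void *fake_get_proc(const char *name) {
    if(strcmp(name, "square") == 0)
        return (void*)square_impl;
    return NULL;
}

/* Same shape as LOAD_INST above: #name stringifies the identifier for the
   lookup and the error message, PFN_##name token-pastes the typedef name. */
#define LOAD(name) name = (PFN_##name)fake_get_proc(#name); if(!name) { fprintf(stderr, "could not find " #name "\n"); return 0; }

static int load_all(void) {
    LOAD(square)
    return 1;
}
```

The diff's version jumps to the shared `done:` cleanup label on failure instead of returning, so one macro covers both lookup and error handling for every entry point.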
52	src/defs.c
@@ -2,10 +2,11 @@
 #include <assert.h>
 
 bool video_codec_is_hdr(gsr_video_codec video_codec) {
-    // TODO: Vulkan
     switch(video_codec) {
         case GSR_VIDEO_CODEC_HEVC_HDR:
         case GSR_VIDEO_CODEC_AV1_HDR:
+        case GSR_VIDEO_CODEC_HEVC_HDR_VULKAN:
+        case GSR_VIDEO_CODEC_AV1_HDR_VULKAN:
             return true;
         default:
             return false;
@@ -13,24 +14,30 @@ bool video_codec_is_hdr(gsr_video_codec video_codec) {
 }
 
 gsr_video_codec hdr_video_codec_to_sdr_video_codec(gsr_video_codec video_codec) {
-    // TODO: Vulkan
     switch(video_codec) {
         case GSR_VIDEO_CODEC_HEVC_HDR:
             return GSR_VIDEO_CODEC_HEVC;
         case GSR_VIDEO_CODEC_AV1_HDR:
             return GSR_VIDEO_CODEC_AV1;
+        case GSR_VIDEO_CODEC_HEVC_HDR_VULKAN:
+            return GSR_VIDEO_CODEC_HEVC_VULKAN;
+        case GSR_VIDEO_CODEC_AV1_HDR_VULKAN:
+            return GSR_VIDEO_CODEC_AV1_VULKAN;
         default:
             return video_codec;
     }
 }
 
 gsr_color_depth video_codec_to_bit_depth(gsr_video_codec video_codec) {
-    // TODO: 10-bit Vulkan
     switch(video_codec) {
         case GSR_VIDEO_CODEC_HEVC_HDR:
         case GSR_VIDEO_CODEC_HEVC_10BIT:
         case GSR_VIDEO_CODEC_AV1_HDR:
         case GSR_VIDEO_CODEC_AV1_10BIT:
+        case GSR_VIDEO_CODEC_HEVC_HDR_VULKAN:
+        case GSR_VIDEO_CODEC_HEVC_10BIT_VULKAN:
+        case GSR_VIDEO_CODEC_AV1_HDR_VULKAN:
+        case GSR_VIDEO_CODEC_AV1_10BIT_VULKAN:
             return GSR_COLOR_DEPTH_10_BITS;
         default:
             return GSR_COLOR_DEPTH_8_BITS;
@@ -39,28 +46,34 @@ gsr_color_depth video_codec_to_bit_depth(gsr_video_codec video_codec) {
 
 const char* video_codec_to_string(gsr_video_codec video_codec) {
     switch(video_codec) {
         case GSR_VIDEO_CODEC_H264: return "h264";
         case GSR_VIDEO_CODEC_HEVC: return "hevc";
         case GSR_VIDEO_CODEC_HEVC_HDR: return "hevc_hdr";
         case GSR_VIDEO_CODEC_HEVC_10BIT: return "hevc_10bit";
         case GSR_VIDEO_CODEC_AV1: return "av1";
         case GSR_VIDEO_CODEC_AV1_HDR: return "av1_hdr";
         case GSR_VIDEO_CODEC_AV1_10BIT: return "av1_10bit";
         case GSR_VIDEO_CODEC_VP8: return "vp8";
         case GSR_VIDEO_CODEC_VP9: return "vp9";
         case GSR_VIDEO_CODEC_H264_VULKAN: return "h264_vulkan";
         case GSR_VIDEO_CODEC_HEVC_VULKAN: return "hevc_vulkan";
+        case GSR_VIDEO_CODEC_HEVC_HDR_VULKAN: return "hevc_hdr_vulkan";
+        case GSR_VIDEO_CODEC_HEVC_10BIT_VULKAN: return "hevc_10bit_vulkan";
+        case GSR_VIDEO_CODEC_AV1_VULKAN: return "av1_vulkan";
+        case GSR_VIDEO_CODEC_AV1_HDR_VULKAN: return "av1_hdr_vulkan";
+        case GSR_VIDEO_CODEC_AV1_10BIT_VULKAN: return "av1_10bit_vulkan";
     }
     return "";
 }
 
 // bool video_codec_is_hevc(gsr_video_codec video_codec) {
-//     // TODO: 10-bit vulkan
 //     switch(video_codec) {
 //         case GSR_VIDEO_CODEC_HEVC:
 //         case GSR_VIDEO_CODEC_HEVC_HDR:
 //         case GSR_VIDEO_CODEC_HEVC_10BIT:
 //         case GSR_VIDEO_CODEC_HEVC_VULKAN:
+//         case GSR_VIDEO_CODEC_HEVC_HDR_VULKAN:
+//         case GSR_VIDEO_CODEC_HEVC_10BIT_VULKAN:
 //             return true;
 //         default:
 //             return false;
@@ -68,11 +81,13 @@ const char* video_codec_to_string(gsr_video_codec video_codec) {
 // }
 
 bool video_codec_is_av1(gsr_video_codec video_codec) {
-    // TODO: Vulkan
     switch(video_codec) {
         case GSR_VIDEO_CODEC_AV1:
         case GSR_VIDEO_CODEC_AV1_HDR:
         case GSR_VIDEO_CODEC_AV1_10BIT:
+        case GSR_VIDEO_CODEC_AV1_VULKAN:
+        case GSR_VIDEO_CODEC_AV1_HDR_VULKAN:
+        case GSR_VIDEO_CODEC_AV1_10BIT_VULKAN:
             return true;
         default:
             return false;
@@ -83,6 +98,11 @@ bool video_codec_is_vulkan(gsr_video_codec video_codec) {
     switch(video_codec) {
         case GSR_VIDEO_CODEC_H264_VULKAN:
         case GSR_VIDEO_CODEC_HEVC_VULKAN:
+        case GSR_VIDEO_CODEC_HEVC_HDR_VULKAN:
+        case GSR_VIDEO_CODEC_HEVC_10BIT_VULKAN:
+        case GSR_VIDEO_CODEC_AV1_VULKAN:
+        case GSR_VIDEO_CODEC_AV1_HDR_VULKAN:
+        case GSR_VIDEO_CODEC_AV1_10BIT_VULKAN:
             return true;
         default:
             return false;
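The src/defs.c changes are all about keeping several switch tables consistent: every new `*_VULKAN` codec has to appear in `video_codec_is_hdr`, `hdr_video_codec_to_sdr_video_codec`, `video_codec_to_bit_depth`, `video_codec_to_string` and `video_codec_is_vulkan`. A toy model of the invariant those tables maintain (the `mini_*` names are invented, not the gsr enums): an HDR codec must map to a non-HDR codec of the same backend, and a non-HDR codec maps to itself.

```c
/* Miniature model of the mapping pattern in src/defs.c. */
typedef enum {
    CODEC_HEVC,
    CODEC_HEVC_HDR,
    CODEC_HEVC_VULKAN,
    CODEC_HEVC_HDR_VULKAN,
} mini_codec;

static int mini_is_hdr(mini_codec c) {
    switch(c) {
        case CODEC_HEVC_HDR:
        case CODEC_HEVC_HDR_VULKAN: /* the diff adds cases like this one */
            return 1;
        default:
            return 0;
    }
}

/* HDR -> SDR mapping preserves the backend (vaapi vs vulkan). */
static mini_codec mini_hdr_to_sdr(mini_codec c) {
    switch(c) {
        case CODEC_HEVC_HDR: return CODEC_HEVC;
        case CODEC_HEVC_HDR_VULKAN: return CODEC_HEVC_VULKAN;
        default: return c;
    }
}
```

If one table is extended without the other, an HDR Vulkan codec would silently be treated as 8-bit SDR, which is exactly the class of bug these parallel case additions prevent.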
@@ -216,6 +216,10 @@ static bool gsr_egl_proc_load_egl(gsr_egl *self) {
     self->glNamedBufferStorageMemEXT = (FUNC_glNamedBufferStorageMemEXT)self->eglGetProcAddress("glNamedBufferStorageMemEXT");
     self->glMemoryObjectParameterivEXT = (FUNC_glMemoryObjectParameterivEXT)self->eglGetProcAddress("glMemoryObjectParameterivEXT");
 
+    self->glGenSemaphoresEXT = (FUNC_glGenSemaphoresEXT)self->eglGetProcAddress("glGenSemaphoresEXT");
+    self->glImportSemaphoreFdEXT = (FUNC_glImportSemaphoreFdEXT)self->eglGetProcAddress("glImportSemaphoreFdEXT");
+    self->glSignalSemaphoreEXT = (FUNC_glSignalSemaphoreEXT)self->eglGetProcAddress("glSignalSemaphoreEXT");
+
     if(!self->eglExportDMABUFImageQueryMESA) {
         fprintf(stderr, "gsr error: gsr_egl_load failed: could not find eglExportDMABUFImageQueryMESA\n");
         return false;
@@ -4,29 +4,120 @@
 
 #include <libavcodec/avcodec.h>
 #define VK_NO_PROTOTYPES
-//#include <libavutil/hwcontext_vulkan.h>
+#include <libavutil/hwcontext_vulkan.h>
 
-//#include <vulkan/vulkan_core.h>
+#include <vulkan/vulkan_core.h>
 
 #define GL_HANDLE_TYPE_OPAQUE_FD_EXT 0x9586
 #define GL_TEXTURE_TILING_EXT 0x9580
 #define GL_OPTIMAL_TILING_EXT 0x9584
 #define GL_LINEAR_TILING_EXT 0x9585
+#define GL_DEDICATED_MEMORY_OBJECT_EXT 0x9581
+#define GL_LAYOUT_GENERAL_EXT 0x958D
+
+typedef struct {
+    PFN_vkCreateImage vkCreateImage;
+    PFN_vkDestroyImage vkDestroyImage;
+    PFN_vkGetImageMemoryRequirements vkGetImageMemoryRequirements;
+    PFN_vkAllocateMemory vkAllocateMemory;
+    PFN_vkFreeMemory vkFreeMemory;
+    PFN_vkBindImageMemory vkBindImageMemory;
+    PFN_vkGetMemoryFdKHR vkGetMemoryFdKHR;
+    PFN_vkGetPhysicalDeviceMemoryProperties vkGetPhysicalDeviceMemoryProperties;
+    PFN_vkCreateCommandPool vkCreateCommandPool;
+    PFN_vkDestroyCommandPool vkDestroyCommandPool;
+    PFN_vkAllocateCommandBuffers vkAllocateCommandBuffers;
+    PFN_vkBeginCommandBuffer vkBeginCommandBuffer;
+    PFN_vkEndCommandBuffer vkEndCommandBuffer;
+    PFN_vkCmdPipelineBarrier vkCmdPipelineBarrier;
+    PFN_vkCmdCopyImage vkCmdCopyImage;
+    PFN_vkCreateFence vkCreateFence;
+    PFN_vkDestroyFence vkDestroyFence;
+    PFN_vkResetFences vkResetFences;
+    PFN_vkWaitForFences vkWaitForFences;
+    PFN_vkGetDeviceQueue vkGetDeviceQueue;
+    PFN_vkQueueSubmit vkQueueSubmit;
+    PFN_vkResetCommandBuffer vkResetCommandBuffer;
+    PFN_vkCreateSemaphore vkCreateSemaphore;
+    PFN_vkDestroySemaphore vkDestroySemaphore;
+    PFN_vkGetSemaphoreFdKHR vkGetSemaphoreFdKHR;
+} gsr_vk_funcs;
+
 typedef struct {
     gsr_video_encoder_vulkan_params params;
     unsigned int target_textures[2];
     vec2i texture_sizes[2];
     AVBufferRef *device_ctx;
+
+    gsr_vk_funcs vk;
+    VkDevice vk_device;
+    VkQueue vk_queue;
+
+    /* Exportable images that GL renders into */
+    VkImage export_images[2];
+    VkDeviceMemory export_memory[2];
+    VkDeviceSize export_memory_size[2];
+    unsigned int gl_memory_objects[2];
+
+    /* Vulkan command infrastructure for copying to encoder frame */
+    VkCommandPool command_pool;
+    VkCommandBuffer command_buffer;
+    VkFence fence;
+    bool fence_submitted; /* true if the fence was submitted and not yet waited on */
+
+    /* GL→Vulkan semaphore (binary, exported to GL via GL_EXT_semaphore_fd) */
+    VkSemaphore gl_ready_semaphore;
+    unsigned int gl_semaphore;
 } gsr_video_encoder_vulkan;
+
+static bool gsr_vk_funcs_load(gsr_vk_funcs *vk, PFN_vkGetInstanceProcAddr get_inst_proc, VkInstance inst, VkDevice dev) {
+    PFN_vkGetDeviceProcAddr get_dev_proc = (PFN_vkGetDeviceProcAddr)get_inst_proc(inst, "vkGetDeviceProcAddr");
+    if(!get_dev_proc) {
+        fprintf(stderr, "gsr error: gsr_vk_funcs_load: failed to load vkGetDeviceProcAddr\n");
+        return false;
+    }
+
+    #define LOAD_INST(name) vk->name = (PFN_##name)get_inst_proc(inst, #name); if(!vk->name) { fprintf(stderr, "gsr error: gsr_vk_funcs_load: failed to load " #name "\n"); return false; }
+    #define LOAD_DEV(name) vk->name = (PFN_##name)get_dev_proc(dev, #name); if(!vk->name) { fprintf(stderr, "gsr error: gsr_vk_funcs_load: failed to load " #name "\n"); return false; }
+
+    LOAD_INST(vkGetPhysicalDeviceMemoryProperties)
+    LOAD_DEV(vkCreateImage)
+    LOAD_DEV(vkDestroyImage)
+    LOAD_DEV(vkGetImageMemoryRequirements)
+    LOAD_DEV(vkAllocateMemory)
+    LOAD_DEV(vkFreeMemory)
+    LOAD_DEV(vkBindImageMemory)
+    LOAD_DEV(vkGetMemoryFdKHR)
+    LOAD_DEV(vkCreateCommandPool)
+    LOAD_DEV(vkDestroyCommandPool)
+    LOAD_DEV(vkAllocateCommandBuffers)
+    LOAD_DEV(vkBeginCommandBuffer)
+    LOAD_DEV(vkEndCommandBuffer)
+    LOAD_DEV(vkCmdPipelineBarrier)
+    LOAD_DEV(vkCmdCopyImage)
+    LOAD_DEV(vkCreateFence)
+    LOAD_DEV(vkDestroyFence)
+    LOAD_DEV(vkResetFences)
+    LOAD_DEV(vkWaitForFences)
+    LOAD_DEV(vkGetDeviceQueue)
+    LOAD_DEV(vkQueueSubmit)
+    LOAD_DEV(vkResetCommandBuffer)
+    LOAD_DEV(vkCreateSemaphore)
+    LOAD_DEV(vkDestroySemaphore)
+    LOAD_DEV(vkGetSemaphoreFdKHR)
+
+    #undef LOAD_INST
+    #undef LOAD_DEV
+    return true;
+}
+
 static bool gsr_video_encoder_vulkan_setup_context(gsr_video_encoder_vulkan *self, AVCodecContext *video_codec_context) {
     AVDictionary *options = NULL;
-    //av_dict_set(&options, "linear_images", "1", 0);
-    //av_dict_set(&options, "disable_multiplane", "1", 0);
-#if 0
-    // TODO: Use correct device
-    if(av_hwdevice_ctx_create(&self->device_ctx, AV_HWDEVICE_TYPE_VULKAN, NULL, options, 0) < 0) {
+    char device_index_str[32];
+    snprintf(device_index_str, sizeof(device_index_str), "%d", self->params.egl->vulkan_device_index);
+    if(av_hwdevice_ctx_create(&self->device_ctx, AV_HWDEVICE_TYPE_VULKAN, device_index_str, options, 0) < 0) {
         fprintf(stderr, "gsr error: gsr_video_encoder_vulkan_setup_context: failed to create hardware device context\n");
         return false;
     }
@@ -45,23 +136,19 @@ static bool gsr_video_encoder_vulkan_setup_context(gsr_video_encoder_vulkan *sel
     hw_frame_context->format = video_codec_context->pix_fmt;
     hw_frame_context->device_ctx = (AVHWDeviceContext*)self->device_ctx->data;
 
-    //AVVulkanFramesContext *vk_frame_ctx = (AVVulkanFramesContext*)hw_frame_context->hwctx;
-    //hw_frame_context->initial_pool_size = 20;
 
     if (av_hwframe_ctx_init(frame_context) < 0) {
         fprintf(stderr, "gsr error: gsr_video_encoder_vulkan_setup_context: failed to initialize hardware frame context "
             "(note: ffmpeg version needs to be > 4.0)\n");
         av_buffer_unref(&self->device_ctx);
-        //av_buffer_unref(&frame_context);
         return false;
     }
 
     video_codec_context->hw_frames_ctx = av_buffer_ref(frame_context);
     av_buffer_unref(&frame_context);
-#endif
     return true;
 }
-#if 0
 static AVVulkanDeviceContext* video_codec_context_get_vulkan_data(AVCodecContext *video_codec_context) {
     AVBufferRef *hw_frames_ctx = video_codec_context->hw_frames_ctx;
     if(!hw_frames_ctx)
@@ -75,24 +162,118 @@ static AVVulkanDeviceContext* video_codec_context_get_vulkan_data(AVCodecContext
     return (AVVulkanDeviceContext*)device_context->hwctx;
 }
 
+static int get_graphics_queue_family(AVVulkanDeviceContext *vv) {
+    for(int i = 0; i < vv->nb_qf; i++) {
+        if(vv->qf[i].flags & VK_QUEUE_GRAPHICS_BIT)
+            return vv->qf[i].idx;
+    }
+    /* Fall back to any queue that supports transfer */
+    for(int i = 0; i < vv->nb_qf; i++) {
+        if(vv->qf[i].flags & VK_QUEUE_TRANSFER_BIT)
+            return vv->qf[i].idx;
+    }
+    return -1;
+}
+
 static uint32_t get_memory_type_idx(VkPhysicalDevice pdev, const VkMemoryRequirements *mem_reqs, VkMemoryPropertyFlagBits prop_flags, PFN_vkGetPhysicalDeviceMemoryProperties vkGetPhysicalDeviceMemoryProperties) {
     VkPhysicalDeviceMemoryProperties pdev_mem_props;
-    uint32_t i;
 
     vkGetPhysicalDeviceMemoryProperties(pdev, &pdev_mem_props);
 
-    for (i = 0; i < pdev_mem_props.memoryTypeCount; i++) {
-        const VkMemoryType *type = &pdev_mem_props.memoryTypes[i];
-        if ((mem_reqs->memoryTypeBits & (1 << i)) &&
-            (type->propertyFlags & prop_flags) == prop_flags) {
+    for(uint32_t i = 0; i < pdev_mem_props.memoryTypeCount; i++) {
+        if((mem_reqs->memoryTypeBits & (1 << i)) &&
+            (pdev_mem_props.memoryTypes[i].propertyFlags & prop_flags) == prop_flags) {
             return i;
-            break;
         }
     }
     return UINT32_MAX;
 }
-#endif
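The rewritten `get_memory_type_idx` above scans the physical device's memory types for the first one that is both allowed by the resource's `memoryTypeBits` mask and carries every requested property flag. A small standalone model of that selection rule (the `fake_mem_props` struct is an invented stand-in for `VkPhysicalDeviceMemoryProperties`, not the Vulkan API):

```c
#include <stdint.h>

/* Invented miniature of VkPhysicalDeviceMemoryProperties:
   just the property flags per memory type. */
typedef struct {
    uint32_t count;
    uint32_t property_flags[32];
} fake_mem_props;

/* Same rule as get_memory_type_idx in the diff: pick the first type that
   is permitted by the resource's memoryTypeBits mask AND has all of the
   requested property flags set. */
static uint32_t pick_memory_type(const fake_mem_props *props, uint32_t type_bits, uint32_t wanted_flags) {
    for(uint32_t i = 0; i < props->count; i++) {
        if((type_bits & (1u << i)) &&
            (props->property_flags[i] & wanted_flags) == wanted_flags)
            return i;
    }
    return UINT32_MAX; /* no suitable type */
}

/* Self-check: type 1 lacks the wanted flag, type 2 has it; a mask that
   excludes every matching type yields UINT32_MAX. */
static int demo(void) {
    fake_mem_props props = { .count = 3, .property_flags = { 0x2, 0x4, 0x1 } };
    if(pick_memory_type(&props, 0x6, 0x1) != 2) return 0;
    if(pick_memory_type(&props, 0x1, 0x1) != UINT32_MAX) return 0;
    return 1;
}
```

This is also why the diff removed the unreachable `break;` after `return i;`: once a match is found the loop exits through the return.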
+static bool create_exportable_image(
+    const gsr_vk_funcs *vk,
+    VkDevice dev,
+    VkPhysicalDevice phys_dev,
+    int width, int height,
+    VkFormat format,
+    VkImage *out_image,
+    VkDeviceMemory *out_memory,
+    VkDeviceSize *out_size)
+{
+    VkExternalMemoryImageCreateInfo ext_img_info = {
+        .sType = VK_STRUCTURE_TYPE_EXTERNAL_MEMORY_IMAGE_CREATE_INFO,
+        .handleTypes = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT,
+    };
+    VkImageCreateInfo img_info = {
+        .sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
+        .pNext = &ext_img_info,
+        .imageType = VK_IMAGE_TYPE_2D,
+        .format = format,
+        .extent = { (uint32_t)width, (uint32_t)height, 1 },
+        .mipLevels = 1,
+        .arrayLayers = 1,
+        .samples = VK_SAMPLE_COUNT_1_BIT,
+        .tiling = VK_IMAGE_TILING_OPTIMAL,
+        .usage = VK_IMAGE_USAGE_TRANSFER_SRC_BIT |
+            VK_IMAGE_USAGE_SAMPLED_BIT |
+            VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT |
+            VK_IMAGE_USAGE_STORAGE_BIT,
+        .flags = VK_IMAGE_CREATE_ALIAS_BIT | VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT,
+        .sharingMode = VK_SHARING_MODE_EXCLUSIVE,
+        .initialLayout = VK_IMAGE_LAYOUT_UNDEFINED,
+    };
+
+    if(vk->vkCreateImage(dev, &img_info, NULL, out_image) != VK_SUCCESS) {
+        fprintf(stderr, "gsr error: create_exportable_image: vkCreateImage failed\n");
+        return false;
+    }
+
+    VkMemoryRequirements mem_reqs;
+    vk->vkGetImageMemoryRequirements(dev, *out_image, &mem_reqs);
+
+    uint32_t mem_type_idx = get_memory_type_idx(phys_dev, &mem_reqs,
+        VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT, vk->vkGetPhysicalDeviceMemoryProperties);
+    if(mem_type_idx == UINT32_MAX) {
+        fprintf(stderr, "gsr error: create_exportable_image: no suitable memory type\n");
+        vk->vkDestroyImage(dev, *out_image, NULL);
+        *out_image = VK_NULL_HANDLE;
+        return false;
+    }
+
+    VkMemoryDedicatedAllocateInfo ded_info = {
+        .sType = VK_STRUCTURE_TYPE_MEMORY_DEDICATED_ALLOCATE_INFO,
+        .image = *out_image,
+    };
+    VkExportMemoryAllocateInfo exp_mem_info = {
+        .sType = VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO,
+        .pNext = &ded_info,
+        .handleTypes = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT,
+    };
+    VkMemoryAllocateInfo mem_alloc_info = {
+        .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO,
+        .pNext = &exp_mem_info,
+        .allocationSize = mem_reqs.size,
+        .memoryTypeIndex = mem_type_idx,
+    };
+
+    if(vk->vkAllocateMemory(dev, &mem_alloc_info, NULL, out_memory) != VK_SUCCESS) {
+        fprintf(stderr, "gsr error: create_exportable_image: vkAllocateMemory failed\n");
+        vk->vkDestroyImage(dev, *out_image, NULL);
+        *out_image = VK_NULL_HANDLE;
+        return false;
+    }
+
+    if(vk->vkBindImageMemory(dev, *out_image, *out_memory, 0) != VK_SUCCESS) {
+        fprintf(stderr, "gsr error: create_exportable_image: vkBindImageMemory failed\n");
+        vk->vkFreeMemory(dev, *out_memory, NULL);
+        vk->vkDestroyImage(dev, *out_image, NULL);
+        *out_memory = VK_NULL_HANDLE;
+        *out_image = VK_NULL_HANDLE;
+        return false;
+    }
+
+    *out_size = mem_reqs.size;
+    return true;
+}
+
 static bool gsr_video_encoder_vulkan_setup_textures(gsr_video_encoder_vulkan *self, AVCodecContext *video_codec_context, AVFrame *frame) {
     const int res = av_hwframe_get_buffer(video_codec_context->hw_frames_ctx, frame, 0);
     if(res < 0) {
@@ -101,135 +282,179 @@ static bool gsr_video_encoder_vulkan_setup_textures(gsr_video_encoder_vulkan *se
     }
 
     while(self->params.egl->glGetError()) {}
-#if 0
-    AVVkFrame *target_surface_id = (AVVkFrame*)frame->data[0];
-    AVVulkanDeviceContext* vv = video_codec_context_get_vulkan_data(video_codec_context);
-    const size_t luma_size = frame->width * frame->height;
-    if(vv) {
-        PFN_vkGetImageMemoryRequirements vkGetImageMemoryRequirements = (PFN_vkGetImageMemoryRequirements)vv->get_proc_addr(vv->inst, "vkGetImageMemoryRequirements");
-        PFN_vkAllocateMemory vkAllocateMemory = (PFN_vkAllocateMemory)vv->get_proc_addr(vv->inst, "vkAllocateMemory");
-        PFN_vkGetPhysicalDeviceMemoryProperties vkGetPhysicalDeviceMemoryProperties = (PFN_vkGetPhysicalDeviceMemoryProperties)vv->get_proc_addr(vv->inst, "vkGetPhysicalDeviceMemoryProperties");
-        PFN_vkGetMemoryFdKHR vkGetMemoryFdKHR = (PFN_vkGetMemoryFdKHR)vv->get_proc_addr(vv->inst, "vkGetMemoryFdKHR");
-
-        VkMemoryRequirements mem_reqs = {0};
-        vkGetImageMemoryRequirements(vv->act_dev, target_surface_id->img[0], &mem_reqs);
-
-        fprintf(stderr, "size: %lu, alignment: %lu, memory bits: 0x%08x\n", mem_reqs.size, mem_reqs.alignment, mem_reqs.memoryTypeBits);
-        VkDeviceMemory mem;
-        {
-            VkExportMemoryAllocateInfo exp_mem_info;
-            VkMemoryAllocateInfo mem_alloc_info;
-            VkMemoryDedicatedAllocateInfoKHR ded_info;
-
-            memset(&exp_mem_info, 0, sizeof(exp_mem_info));
-            exp_mem_info.sType = VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO;
-            exp_mem_info.handleTypes = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT;
-
-            memset(&ded_info, 0, sizeof(ded_info));
-            ded_info.sType = VK_STRUCTURE_TYPE_MEMORY_DEDICATED_ALLOCATE_INFO;
-            ded_info.image = target_surface_id->img[0];
-
-            exp_mem_info.pNext = &ded_info;
-
-            memset(&mem_alloc_info, 0, sizeof(mem_alloc_info));
-            mem_alloc_info.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
-            mem_alloc_info.pNext = &exp_mem_info;
-            mem_alloc_info.allocationSize = target_surface_id->size[0];
-            mem_alloc_info.memoryTypeIndex = get_memory_type_idx(vv->phys_dev, &mem_reqs, VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT, vkGetPhysicalDeviceMemoryProperties);
-
-            if (mem_alloc_info.memoryTypeIndex == UINT32_MAX) {
-                fprintf(stderr, "No suitable memory type index found.\n");
-                return VK_NULL_HANDLE;
+    AVVulkanDeviceContext *vv = video_codec_context_get_vulkan_data(video_codec_context);
+    if(!vv) {
+        fprintf(stderr, "gsr error: gsr_video_encoder_vulkan_setup_textures: failed to get vulkan device context\n");
+        return false;
+    }
+
+    if(!gsr_vk_funcs_load(&self->vk, vv->get_proc_addr, vv->inst, vv->act_dev))
+        return false;
+
+    self->vk_device = vv->act_dev;
+
+    const bool is_p010 = self->params.color_depth == GSR_COLOR_DEPTH_10_BITS;
+    const VkFormat fmt_y = is_p010 ? VK_FORMAT_R16_UNORM : VK_FORMAT_R8_UNORM;
+    const VkFormat fmt_uv = is_p010 ? VK_FORMAT_R16G16_UNORM : VK_FORMAT_R8G8_UNORM;
+    const unsigned int gl_fmt_y = is_p010 ? GL_R16 : GL_R8;
+    const unsigned int gl_fmt_uv = is_p010 ? GL_RG16 : GL_RG8;
+
+    if(!create_exportable_image(&self->vk, vv->act_dev, vv->phys_dev,
+        frame->width, frame->height, fmt_y,
+        &self->export_images[0], &self->export_memory[0], &self->export_memory_size[0]))
+        return false;
+
+    if(!create_exportable_image(&self->vk, vv->act_dev, vv->phys_dev,
+        frame->width / 2, frame->height / 2, fmt_uv,
+        &self->export_images[1], &self->export_memory[1], &self->export_memory_size[1]))
+        return false;
+
+    /* Export Vulkan memory as FDs and import into GL */
+    for(int i = 0; i < 2; i++) {
+        VkMemoryGetFdInfoKHR fd_info = {
+            .sType = VK_STRUCTURE_TYPE_MEMORY_GET_FD_INFO_KHR,
+            .memory = self->export_memory[i],
+            .handleType = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT,
+        };
+        int fd = -1;
+        if(self->vk.vkGetMemoryFdKHR(vv->act_dev, &fd_info, &fd) != VK_SUCCESS) {
+            fprintf(stderr, "gsr error: gsr_video_encoder_vulkan_setup_textures: vkGetMemoryFdKHR failed for plane %d\n", i);
+            return false;
+        }
+
+        self->params.egl->glCreateMemoryObjectsEXT(1, &self->gl_memory_objects[i]);
+        const int dedicated = 1;
+        self->params.egl->glMemoryObjectParameterivEXT(self->gl_memory_objects[i],
+            GL_DEDICATED_MEMORY_OBJECT_EXT, &dedicated);
+        self->params.egl->glImportMemoryFdEXT(self->gl_memory_objects[i], self->export_memory_size[i],
+            GL_HANDLE_TYPE_OPAQUE_FD_EXT, fd);
+        if(!self->params.egl->glIsMemoryObjectEXT(self->gl_memory_objects[i])) {
+            fprintf(stderr, "gsr error: gsr_video_encoder_vulkan_setup_textures: failed to import memory FD for plane %d\n", i);
+            return false;
+        }
+    }
+
+    /* Create GL textures backed by the exportable images */
+    self->params.egl->glGenTextures(2, self->target_textures);
+
+    self->params.egl->glBindTexture(GL_TEXTURE_2D, self->target_textures[0]);
+    self->params.egl->glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_TILING_EXT, GL_OPTIMAL_TILING_EXT);
+    self->params.egl->glTexStorageMem2DEXT(GL_TEXTURE_2D, 1, gl_fmt_y,
+        frame->width, frame->height,
+        self->gl_memory_objects[0], 0);
+    self->params.egl->glBindTexture(GL_TEXTURE_2D, 0);
+
+    self->params.egl->glBindTexture(GL_TEXTURE_2D, self->target_textures[1]);
+    self->params.egl->glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_TILING_EXT, GL_OPTIMAL_TILING_EXT);
+    self->params.egl->glTexStorageMem2DEXT(GL_TEXTURE_2D, 1, gl_fmt_uv,
+        frame->width / 2, frame->height / 2,
+        self->gl_memory_objects[1], 0);
+    self->params.egl->glBindTexture(GL_TEXTURE_2D, 0);
+
+    self->texture_sizes[0] = (vec2i){ frame->width, frame->height };
+    self->texture_sizes[1] = (vec2i){ frame->width / 2, frame->height / 2 };
+
+    /* Set up Vulkan command infrastructure */
+    VkCommandPoolCreateInfo pool_info = {
+        .sType = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO,
+        .flags = VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT,
+        .queueFamilyIndex = (uint32_t)get_graphics_queue_family(vv),
+    };
+    if(self->vk.vkCreateCommandPool(vv->act_dev, &pool_info, NULL, &self->command_pool) != VK_SUCCESS) {
+        fprintf(stderr, "gsr error: gsr_video_encoder_vulkan_setup_textures: vkCreateCommandPool failed\n");
+        return false;
+    }
+
+    VkCommandBufferAllocateInfo cb_alloc_info = {
+        .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO,
+        .commandPool = self->command_pool,
+        .level = VK_COMMAND_BUFFER_LEVEL_PRIMARY,
+        .commandBufferCount = 1,
+    };
+    if(self->vk.vkAllocateCommandBuffers(vv->act_dev, &cb_alloc_info, &self->command_buffer) != VK_SUCCESS) {
+        fprintf(stderr, "gsr error: gsr_video_encoder_vulkan_setup_textures: vkAllocateCommandBuffers failed\n");
+        return false;
+    }
+
+    VkFenceCreateInfo fence_info = {
+        .sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO,
+    };
+    if(self->vk.vkCreateFence(vv->act_dev, &fence_info, NULL, &self->fence) != VK_SUCCESS) {
+        fprintf(stderr, "gsr error: gsr_video_encoder_vulkan_setup_textures: vkCreateFence failed\n");
+        return false;
+    }
+
+    self->vk.vkGetDeviceQueue(vv->act_dev, (uint32_t)get_graphics_queue_family(vv), 0, &self->vk_queue);
+
+    /* Transition export images UNDEFINED → GENERAL so GL can use them */
|
||||||
|
VkCommandBufferBeginInfo begin_info = {
|
||||||
|
.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO,
|
||||||
|
.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT,
|
||||||
|
};
|
||||||
|
self->vk.vkBeginCommandBuffer(self->command_buffer, &begin_info);
|
||||||
|
|
||||||
|
VkImageMemoryBarrier init_barriers[2];
|
||||||
|
for(int i = 0; i < 2; i++) {
|
||||||
|
init_barriers[i] = (VkImageMemoryBarrier){
|
||||||
|
.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
|
||||||
|
.srcAccessMask = 0,
|
||||||
|
.dstAccessMask = 0,
|
||||||
|
.oldLayout = VK_IMAGE_LAYOUT_UNDEFINED,
|
||||||
|
.newLayout = VK_IMAGE_LAYOUT_GENERAL,
|
||||||
|
.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
|
||||||
|
.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
|
||||||
|
.image = self->export_images[i],
|
||||||
|
.subresourceRange = {
|
||||||
|
.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
|
||||||
|
.levelCount = 1,
|
||||||
|
.layerCount = 1,
|
||||||
|
},
|
||||||
|
};
|
||||||
|
}
|
||||||
|
self->vk.vkCmdPipelineBarrier(self->command_buffer,
|
||||||
|
VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT, VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT,
|
||||||
|
0, 0, NULL, 0, NULL, 2, init_barriers);
|
||||||
|
|
||||||
|
self->vk.vkEndCommandBuffer(self->command_buffer);
|
||||||
|
|
||||||
|
VkSubmitInfo submit_info = {
|
||||||
|
.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
|
||||||
|
.commandBufferCount = 1,
|
||||||
|
.pCommandBuffers = &self->command_buffer,
|
||||||
|
};
|
||||||
|
self->vk.vkQueueSubmit(self->vk_queue, 1, &submit_info, self->fence);
|
||||||
|
self->vk.vkWaitForFences(vv->act_dev, 1, &self->fence, VK_TRUE, UINT64_MAX);
|
||||||
|
self->vk.vkResetFences(vv->act_dev, 1, &self->fence);
|
||||||
|
self->vk.vkResetCommandBuffer(self->command_buffer, 0);
|
||||||
|
|
||||||
|
/* Create GL→Vulkan sync semaphore (binary, exported via OPAQUE_FD) */
|
||||||
|
if(self->params.egl->glGenSemaphoresEXT && self->params.egl->glImportSemaphoreFdEXT) {
|
||||||
|
VkExportSemaphoreCreateInfo exp_sem_info = {
|
||||||
|
.sType = VK_STRUCTURE_TYPE_EXPORT_SEMAPHORE_CREATE_INFO,
|
||||||
|
.handleTypes = VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_FD_BIT,
|
||||||
|
};
|
||||||
|
VkSemaphoreCreateInfo sem_info = {
|
||||||
|
.sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO,
|
||||||
|
.pNext = &exp_sem_info,
|
||||||
|
};
|
||||||
|
if(self->vk.vkCreateSemaphore(vv->act_dev, &sem_info, NULL, &self->gl_ready_semaphore) == VK_SUCCESS) {
|
||||||
|
VkSemaphoreGetFdInfoKHR get_fd_info = {
|
||||||
|
.sType = VK_STRUCTURE_TYPE_SEMAPHORE_GET_FD_INFO_KHR,
|
||||||
|
.semaphore = self->gl_ready_semaphore,
|
||||||
|
.handleType = VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_FD_BIT,
|
||||||
|
};
|
||||||
|
int sem_fd = -1;
|
||||||
|
if(self->vk.vkGetSemaphoreFdKHR(vv->act_dev, &get_fd_info, &sem_fd) == VK_SUCCESS) {
|
||||||
|
self->params.egl->glGenSemaphoresEXT(1, &self->gl_semaphore);
|
||||||
|
self->params.egl->glImportSemaphoreFdEXT(self->gl_semaphore, GL_HANDLE_TYPE_OPAQUE_FD_EXT, sem_fd);
|
||||||
|
} else {
|
||||||
|
self->vk.vkDestroySemaphore(vv->act_dev, self->gl_ready_semaphore, NULL);
|
||||||
|
self->gl_ready_semaphore = VK_NULL_HANDLE;
|
||||||
}
|
}
|
-            if (vkAllocateMemory(vv->act_dev, &mem_alloc_info, 0, &mem) != VK_SUCCESS)
-                return VK_NULL_HANDLE;
-
-            fprintf(stderr, "memory: %p\n", (void*)mem);
-        }
-    }
-
-    fprintf(stderr, "target surface id: %p, %zu, %zu\n", (void*)target_surface_id->mem[0], target_surface_id->offset[0], target_surface_id->offset[1]);
-    fprintf(stderr, "vkGetMemoryFdKHR: %p\n", (void*)vkGetMemoryFdKHR);
-
-    int fd = 0;
-    VkMemoryGetFdInfoKHR fd_info;
-    memset(&fd_info, 0, sizeof(fd_info));
-    fd_info.sType = VK_STRUCTURE_TYPE_MEMORY_GET_FD_INFO_KHR;
-    fd_info.memory = target_surface_id->mem[0];
-    fd_info.handleType = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT;
-    if(vkGetMemoryFdKHR(vv->act_dev, &fd_info, &fd) != VK_SUCCESS) {
-        fprintf(stderr, "failed!\n");
-    } else {
-        fprintf(stderr, "fd: %d\n", fd);
-    }
-
-    fprintf(stderr, "glImportMemoryFdEXT: %p, size: %zu\n", (void*)self->params.egl->glImportMemoryFdEXT, target_surface_id->size[0]);
-    const int tiling = target_surface_id->tiling == VK_IMAGE_TILING_LINEAR ? GL_LINEAR_TILING_EXT : GL_OPTIMAL_TILING_EXT;
-
-    if(tiling != GL_OPTIMAL_TILING_EXT) {
-        fprintf(stderr, "tiling %d is not supported, only GL_OPTIMAL_TILING_EXT (%d) is supported\n", tiling, GL_OPTIMAL_TILING_EXT);
-    }
-
-    unsigned int gl_memory_obj = 0;
-    self->params.egl->glCreateMemoryObjectsEXT(1, &gl_memory_obj);
-
-    //const int dedicated = GL_TRUE;
-    //self->params.egl->glMemoryObjectParameterivEXT(gl_memory_obj, GL_DEDICATED_MEMORY_OBJECT_EXT, &dedicated);
-
-    self->params.egl->glImportMemoryFdEXT(gl_memory_obj, target_surface_id->size[0], GL_HANDLE_TYPE_OPAQUE_FD_EXT, fd);
-    if(!self->params.egl->glIsMemoryObjectEXT(gl_memory_obj))
-        fprintf(stderr, "failed to create object!\n");
-
-    fprintf(stderr, "gl memory obj: %u, error: %d\n", gl_memory_obj, self->params.egl->glGetError());
-
-    // fprintf(stderr, "0 gl error: %d\n", self->params.egl->glGetError());
-    // unsigned int vertex_buffer = 0;
-    // self->params.egl->glGenBuffers(1, &vertex_buffer);
-    // self->params.egl->glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
-    // self->params.egl->glBufferStorageMemEXT(GL_ARRAY_BUFFER, target_surface_id->size[0], gl_memory_obj, target_surface_id->offset[0]);
-    // fprintf(stderr, "1 gl error: %d\n", self->params.egl->glGetError());
-
-    // fprintf(stderr, "0 gl error: %d\n", self->params.egl->glGetError());
-    // unsigned int buffer = 0;
-    // self->params.egl->glCreateBuffers(1, &buffer);
-    // self->params.egl->glNamedBufferStorageMemEXT(buffer, target_surface_id->size[0], gl_memory_obj, target_surface_id->offset[0]);
-    // fprintf(stderr, "1 gl error: %d\n", self->params.egl->glGetError());
-
-    self->params.egl->glGenTextures(1, &self->target_textures[0]);
-    self->params.egl->glBindTexture(GL_TEXTURE_2D, self->target_textures[0]);
-
-    fprintf(stderr, "1 gl error: %d\n", self->params.egl->glGetError());
-    self->params.egl->glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_TILING_EXT, tiling);
-
-    fprintf(stderr, "tiling: %d\n", tiling);
-
-    fprintf(stderr, "2 gl error: %d\n", self->params.egl->glGetError());
-    self->params.egl->glTexStorageMem2DEXT(GL_TEXTURE_2D, 1, GL_R8, frame->width, frame->height, gl_memory_obj, target_surface_id->offset[0]);
-
-    fprintf(stderr, "3 gl error: %d\n", self->params.egl->glGetError());
-    self->params.egl->glBindTexture(GL_TEXTURE_2D, 0);
-
-    self->params.egl->glGenTextures(1, &self->target_textures[1]);
-    self->params.egl->glBindTexture(GL_TEXTURE_2D, self->target_textures[1]);
-
-    fprintf(stderr, "1 gl error: %d\n", self->params.egl->glGetError());
-    self->params.egl->glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_TILING_EXT, tiling);
-
-    fprintf(stderr, "tiling: %d\n", tiling);
-
-    fprintf(stderr, "2 gl error: %d\n", self->params.egl->glGetError());
-    self->params.egl->glTexStorageMem2DEXT(GL_TEXTURE_2D, 1, GL_RG8, frame->width/2, frame->height/2, gl_memory_obj, target_surface_id->offset[0] + luma_size);
-
-    fprintf(stderr, "3 gl error: %d\n", self->params.egl->glGetError());
-    self->params.egl->glBindTexture(GL_TEXTURE_2D, 0);
-
-    self->texture_sizes[0] = (vec2i){ frame->width, frame->height };
-    self->texture_sizes[1] = (vec2i){ frame->width/2, frame->height/2 };
-    }
-#endif
     return true;
 }
 
@@ -263,11 +488,332 @@ static bool gsr_video_encoder_vulkan_start(gsr_video_encoder *encoder, AVCodecCo
     return true;
 }
 
+static void gsr_video_encoder_vulkan_copy_textures_to_frame(gsr_video_encoder *encoder, AVFrame *frame, gsr_color_conversion *color_conversion) {
+    (void)color_conversion;
+    gsr_video_encoder_vulkan *self = encoder->priv;
+    AVVkFrame *vk_frame = (AVVkFrame*)frame->data[0];
+
+    /* Wait for the previous frame's copy to finish before reusing the command buffer */
+    // if(self->fence_submitted) {
+    //     self->vk.vkWaitForFences(self->vk_device, 1, &self->fence, VK_TRUE, UINT64_MAX);
+    self->vk.vkResetFences(self->vk_device, 1, &self->fence);
+    self->fence_submitted = false;
+    // }
+
+    if(self->gl_ready_semaphore != VK_NULL_HANDLE && self->params.egl->glSignalSemaphoreEXT) {
+        /* GPU-side GL→Vulkan sync: signal the semaphore from GL, then flush (no CPU stall) */
+        unsigned int gl_textures[2] = { self->target_textures[0], self->target_textures[1] };
+        unsigned int dst_layouts[2] = { GL_LAYOUT_GENERAL_EXT, GL_LAYOUT_GENERAL_EXT };
+        self->params.egl->glSignalSemaphoreEXT(self->gl_semaphore, 0, NULL, 2, gl_textures, dst_layouts);
+        self->params.egl->glFlush();
+    } else {
+        /* Fallback: CPU stall to ensure GL has finished */
+        self->params.egl->glFinish();
+    }
+
+    self->vk.vkResetCommandBuffer(self->command_buffer, 0);
+
+    VkCommandBufferBeginInfo begin_info = {
+        .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO,
+        .flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT,
+    };
+    self->vk.vkBeginCommandBuffer(self->command_buffer, &begin_info);
+
+    /* Transition export images: GENERAL → TRANSFER_SRC_OPTIMAL */
+    VkImageMemoryBarrier src_barriers[2];
+    for(int i = 0; i < 2; i++) {
+        src_barriers[i] = (VkImageMemoryBarrier){
+            .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
+            .srcAccessMask = VK_ACCESS_MEMORY_WRITE_BIT,
+            .dstAccessMask = VK_ACCESS_TRANSFER_READ_BIT,
+            .oldLayout = VK_IMAGE_LAYOUT_GENERAL,
+            .newLayout = VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
+            .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
+            .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
+            .image = self->export_images[i],
+            .subresourceRange = {
+                .aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
+                .levelCount = 1,
+                .layerCount = 1,
+            },
+        };
+    }
+    self->vk.vkCmdPipelineBarrier(self->command_buffer,
+        VK_PIPELINE_STAGE_ALL_COMMANDS_BIT, VK_PIPELINE_STAGE_TRANSFER_BIT,
+        0, 0, NULL, 0, NULL, 2, src_barriers);
+
+    /*
+     * Detect whether the encoder frame uses one multi-plane image (default NV12/P010
+     * in FFmpeg) or two separate single-plane images.
+     * Multi-plane: img[1] == VK_NULL_HANDLE or img[1] == img[0]
+     */
+    const bool multiplane = (vk_frame->img[1] == VK_NULL_HANDLE || vk_frame->img[1] == vk_frame->img[0]);
+
+    if(multiplane) {
+        /* Transition the encoder's multi-plane image to TRANSFER_DST_OPTIMAL */
+        VkImageMemoryBarrier dst_barrier = {
+            .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
+            .srcAccessMask = VK_ACCESS_MEMORY_READ_BIT | VK_ACCESS_MEMORY_WRITE_BIT,
+            .dstAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
+            .oldLayout = vk_frame->layout[0],
+            .newLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
+            .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
+            .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
+            .image = vk_frame->img[0],
+            .subresourceRange = {
+                .aspectMask = VK_IMAGE_ASPECT_PLANE_0_BIT | VK_IMAGE_ASPECT_PLANE_1_BIT,
+                .levelCount = 1,
+                .layerCount = 1,
+            },
+        };
+        self->vk.vkCmdPipelineBarrier(self->command_buffer,
+            VK_PIPELINE_STAGE_ALL_COMMANDS_BIT, VK_PIPELINE_STAGE_TRANSFER_BIT,
+            0, 0, NULL, 0, NULL, 1, &dst_barrier);
+
+        /* Copy Y plane: export_images[0] (R8/R16) → encoder img PLANE_0 */
+        VkImageCopy copy_y = {
+            .srcSubresource = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1 },
+            .dstSubresource = { VK_IMAGE_ASPECT_PLANE_0_BIT, 0, 0, 1 },
+            .extent = { (uint32_t)frame->width, (uint32_t)frame->height, 1 },
+        };
+        self->vk.vkCmdCopyImage(self->command_buffer,
+            self->export_images[0], VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
+            vk_frame->img[0], VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
+            1, &copy_y);
+
+        /* Copy UV plane: export_images[1] (RG8/RG16) → encoder img PLANE_1 */
+        VkImageCopy copy_uv = {
+            .srcSubresource = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1 },
+            .dstSubresource = { VK_IMAGE_ASPECT_PLANE_1_BIT, 0, 0, 1 },
+            .extent = { (uint32_t)frame->width / 2, (uint32_t)frame->height / 2, 1 },
+        };
+        self->vk.vkCmdCopyImage(self->command_buffer,
+            self->export_images[1], VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
+            vk_frame->img[0], VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
+            1, &copy_uv);
+
+        /* Transition encoder image to GENERAL and update tracked layout */
+        VkImageMemoryBarrier dst_barrier_back = {
+            .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
+            .srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
+            .dstAccessMask = VK_ACCESS_MEMORY_READ_BIT | VK_ACCESS_MEMORY_WRITE_BIT,
+            .oldLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
+            .newLayout = VK_IMAGE_LAYOUT_GENERAL,
+            .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
+            .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
+            .image = vk_frame->img[0],
+            .subresourceRange = {
+                .aspectMask = VK_IMAGE_ASPECT_PLANE_0_BIT | VK_IMAGE_ASPECT_PLANE_1_BIT,
+                .levelCount = 1,
+                .layerCount = 1,
+            },
+        };
+        self->vk.vkCmdPipelineBarrier(self->command_buffer,
+            VK_PIPELINE_STAGE_TRANSFER_BIT, VK_PIPELINE_STAGE_ALL_COMMANDS_BIT,
+            0, 0, NULL, 0, NULL, 1, &dst_barrier_back);
+
+        vk_frame->layout[0] = VK_IMAGE_LAYOUT_GENERAL;
+        vk_frame->layout[1] = VK_IMAGE_LAYOUT_GENERAL;
+    } else {
+        /* Two separate single-plane images */
+        VkImageMemoryBarrier dst_barriers[2] = {
+            {
+                .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
+                .srcAccessMask = VK_ACCESS_MEMORY_READ_BIT | VK_ACCESS_MEMORY_WRITE_BIT,
+                .dstAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
+                .oldLayout = vk_frame->layout[0],
+                .newLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
+                .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
+                .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
+                .image = vk_frame->img[0],
+                .subresourceRange = {
+                    .aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
+                    .levelCount = 1,
+                    .layerCount = 1,
+                },
+            },
+            {
+                .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
+                .srcAccessMask = VK_ACCESS_MEMORY_READ_BIT | VK_ACCESS_MEMORY_WRITE_BIT,
+                .dstAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
+                .oldLayout = vk_frame->layout[1],
+                .newLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
+                .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
+                .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
+                .image = vk_frame->img[1],
+                .subresourceRange = {
+                    .aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
+                    .levelCount = 1,
+                    .layerCount = 1,
+                },
+            },
+        };
+        self->vk.vkCmdPipelineBarrier(self->command_buffer,
+            VK_PIPELINE_STAGE_ALL_COMMANDS_BIT, VK_PIPELINE_STAGE_TRANSFER_BIT,
+            0, 0, NULL, 0, NULL, 2, dst_barriers);
+
+        for(int i = 0; i < 2; i++) {
+            VkImageCopy copy = {
+                .srcSubresource = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1 },
+                .dstSubresource = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1 },
+                .extent = {
+                    (uint32_t)(i == 0 ? frame->width : frame->width / 2),
+                    (uint32_t)(i == 0 ? frame->height : frame->height / 2),
+                    1
+                },
+            };
+            self->vk.vkCmdCopyImage(self->command_buffer,
+                self->export_images[i], VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
+                vk_frame->img[i], VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
+                1, &copy);
+        }
+
+        VkImageMemoryBarrier dst_barriers_back[2] = {
+            {
+                .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
+                .srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
+                .dstAccessMask = VK_ACCESS_MEMORY_READ_BIT | VK_ACCESS_MEMORY_WRITE_BIT,
+                .oldLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
+                .newLayout = VK_IMAGE_LAYOUT_GENERAL,
+                .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
+                .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
+                .image = vk_frame->img[0],
+                .subresourceRange = {
+                    .aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
+                    .levelCount = 1,
+                    .layerCount = 1,
+                },
+            },
+            {
+                .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
+                .srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
+                .dstAccessMask = VK_ACCESS_MEMORY_READ_BIT | VK_ACCESS_MEMORY_WRITE_BIT,
+                .oldLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
+                .newLayout = VK_IMAGE_LAYOUT_GENERAL,
+                .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
+                .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
+                .image = vk_frame->img[1],
+                .subresourceRange = {
+                    .aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
+                    .levelCount = 1,
+                    .layerCount = 1,
+                },
+            },
+        };
+        self->vk.vkCmdPipelineBarrier(self->command_buffer,
+            VK_PIPELINE_STAGE_TRANSFER_BIT, VK_PIPELINE_STAGE_ALL_COMMANDS_BIT,
+            0, 0, NULL, 0, NULL, 2, dst_barriers_back);
+
+        vk_frame->layout[0] = VK_IMAGE_LAYOUT_GENERAL;
+        vk_frame->layout[1] = VK_IMAGE_LAYOUT_GENERAL;
+    }
+
+    /* Transition export images back: TRANSFER_SRC_OPTIMAL → GENERAL for next GL frame */
+    VkImageMemoryBarrier src_barriers_back[2];
+    for(int i = 0; i < 2; i++) {
+        src_barriers_back[i] = (VkImageMemoryBarrier){
+            .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
+            .srcAccessMask = VK_ACCESS_TRANSFER_READ_BIT,
+            .dstAccessMask = VK_ACCESS_MEMORY_WRITE_BIT,
+            .oldLayout = VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
+            .newLayout = VK_IMAGE_LAYOUT_GENERAL,
+            .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
+            .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
+            .image = self->export_images[i],
+            .subresourceRange = {
+                .aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
+                .levelCount = 1,
+                .layerCount = 1,
+            },
+        };
+    }
+    self->vk.vkCmdPipelineBarrier(self->command_buffer,
+        VK_PIPELINE_STAGE_TRANSFER_BIT, VK_PIPELINE_STAGE_ALL_COMMANDS_BIT,
+        0, 0, NULL, 0, NULL, 2, src_barriers_back);
+
+    self->vk.vkEndCommandBuffer(self->command_buffer);
+
+    /*
+     * Detect whether the encoder frame is multiplane to know how many timeline
+     * semaphores need to be signaled.
+     */
+    const bool mp = (vk_frame->img[1] == VK_NULL_HANDLE || vk_frame->img[1] == vk_frame->img[0]);
+    const int num_sems = mp ? 1 : 2;
+
+    if(self->gl_ready_semaphore != VK_NULL_HANDLE) {
+        /*
+         * GPU-side sync path:
+         * - Wait on the GL binary semaphore before executing the copy.
+         * - Signal each AVVkFrame timeline semaphore so FFmpeg knows the frame is ready.
+         */
+        uint64_t signal_values[2];
+        VkSemaphore signal_sems[2];
+        for(int i = 0; i < num_sems; i++) {
+            signal_values[i] = vk_frame->sem_value[i] + 1;
+            signal_sems[i] = vk_frame->sem[i];
+        }
+
+        VkTimelineSemaphoreSubmitInfo timeline_info = {
+            .sType = VK_STRUCTURE_TYPE_TIMELINE_SEMAPHORE_SUBMIT_INFO,
+            .signalSemaphoreValueCount = (uint32_t)num_sems,
+            .pSignalSemaphoreValues = signal_values,
+        };
+
+        const VkPipelineStageFlags wait_stage = VK_PIPELINE_STAGE_ALL_COMMANDS_BIT;
+        VkSubmitInfo submit_info = {
+            .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
+            .pNext = &timeline_info,
+            .waitSemaphoreCount = 1,
+            .pWaitSemaphores = &self->gl_ready_semaphore,
+            .pWaitDstStageMask = &wait_stage,
+            .commandBufferCount = 1,
+            .pCommandBuffers = &self->command_buffer,
+            .signalSemaphoreCount = (uint32_t)num_sems,
+            .pSignalSemaphores = signal_sems,
+        };
+        self->vk.vkQueueSubmit(self->vk_queue, 1, &submit_info, self->fence);
+
+        for(int i = 0; i < num_sems; i++)
+            vk_frame->sem_value[i]++;
+    } else {
+        /* Fallback: plain submit, we already stalled via glFinish() */
+        VkSubmitInfo submit_info = {
+            .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
+            .commandBufferCount = 1,
+            .pCommandBuffers = &self->command_buffer,
+        };
+        self->vk.vkQueueSubmit(self->vk_queue, 1, &submit_info, self->fence);
+    }
+
+    self->fence_submitted = true;
+}
 void gsr_video_encoder_vulkan_stop(gsr_video_encoder_vulkan *self, AVCodecContext *video_codec_context) {
     self->params.egl->glDeleteTextures(2, self->target_textures);
     self->target_textures[0] = 0;
     self->target_textures[1] = 0;
+
+    if(self->vk_device) {
+        /* Drain any in-flight copy before freeing resources */
+        if(self->fence_submitted) {
+            self->vk.vkWaitForFences(self->vk_device, 1, &self->fence, VK_TRUE, UINT64_MAX);
+            self->fence_submitted = false;
+        }
+        if(self->gl_ready_semaphore)
+            self->vk.vkDestroySemaphore(self->vk_device, self->gl_ready_semaphore, NULL);
+        if(self->fence)
+            self->vk.vkDestroyFence(self->vk_device, self->fence, NULL);
+        /* Destroying the command pool also frees the command buffer */
+        if(self->command_pool)
+            self->vk.vkDestroyCommandPool(self->vk_device, self->command_pool, NULL);
+        for(int i = 0; i < 2; i++) {
+            if(self->export_images[i])
+                self->vk.vkDestroyImage(self->vk_device, self->export_images[i], NULL);
+            if(self->export_memory[i])
+                self->vk.vkFreeMemory(self->vk_device, self->export_memory[i], NULL);
+        }
+    }
     if(video_codec_context->hw_frames_ctx)
         av_buffer_unref(&video_codec_context->hw_frames_ctx);
     if(self->device_ctx)
@@ -305,7 +851,7 @@ gsr_video_encoder* gsr_video_encoder_vulkan_create(const gsr_video_encoder_vulka
 
     *encoder = (gsr_video_encoder) {
         .start = gsr_video_encoder_vulkan_start,
-        .copy_textures_to_frame = NULL,
+        .copy_textures_to_frame = gsr_video_encoder_vulkan_copy_textures_to_frame,
         .get_textures = gsr_video_encoder_vulkan_get_textures,
         .destroy = gsr_video_encoder_vulkan_destroy,
         .priv = encoder_vulkan

src/main.cpp
@@ -425,7 +425,7 @@ static AVCodecContext *create_video_codec_context(AVPixelFormat pix_fmt, const A
     if (codec_context->codec_id == AV_CODEC_ID_MPEG1VIDEO)
         codec_context->mb_decision = 2;
 
-    if(!use_software_video_encoder && egl.gpu_info.vendor != GSR_GPU_VENDOR_NVIDIA && arg_parser.bitrate_mode != GSR_BITRATE_MODE_CBR) {
+    if(!use_software_video_encoder && (egl.gpu_info.vendor != GSR_GPU_VENDOR_NVIDIA || video_codec_is_vulkan(arg_parser.video_codec)) && arg_parser.bitrate_mode != GSR_BITRATE_MODE_CBR) {
         // 8 bit / 10 bit = 80%, and increase it even more
         const float quality_multiply = hdr ? (8.0f/10.0f * 0.7f) : 1.0f;
         if(codec_context->codec_id == AV_CODEC_ID_AV1 || codec_context->codec_id == AV_CODEC_ID_H264 || codec_context->codec_id == AV_CODEC_ID_HEVC) {
@@ -479,7 +479,7 @@ static AVCodecContext *create_video_codec_context(AVPixelFormat pix_fmt, const A
         av_opt_set_int(codec_context->priv_data, "b_ref_mode", 0, 0);
         //av_opt_set_int(codec_context->priv_data, "cbr", true, 0);
 
-        if(egl.gpu_info.vendor != GSR_GPU_VENDOR_NVIDIA) {
+        if(egl.gpu_info.vendor != GSR_GPU_VENDOR_NVIDIA || video_codec_is_vulkan(arg_parser.video_codec)) {
             // TODO: More options, better options
             //codec_context->bit_rate = codec_context->width * codec_context->height;
             switch(arg_parser.bitrate_mode) {
@@ -614,51 +614,19 @@ static void dict_set_profile(AVCodecContext *codec_context, gsr_gpu_vendor vendo
 static void video_software_set_qp(AVCodecContext *codec_context, gsr_video_quality video_quality, bool hdr, AVDictionary **options) {
     // 8 bit / 10 bit = 80%
     const float qp_multiply = hdr ? 8.0f/10.0f : 1.0f;
-    if(codec_context->codec_id == AV_CODEC_ID_AV1) {
-        switch(video_quality) {
-            case GSR_VIDEO_QUALITY_MEDIUM:
-                av_dict_set_int(options, "qp", 35 * qp_multiply, 0);
-                break;
-            case GSR_VIDEO_QUALITY_HIGH:
-                av_dict_set_int(options, "qp", 30 * qp_multiply, 0);
-                break;
-            case GSR_VIDEO_QUALITY_VERY_HIGH:
-                av_dict_set_int(options, "qp", 25 * qp_multiply, 0);
-                break;
-            case GSR_VIDEO_QUALITY_ULTRA:
-                av_dict_set_int(options, "qp", 22 * qp_multiply, 0);
-                break;
-        }
-    } else if(codec_context->codec_id == AV_CODEC_ID_H264) {
-        switch(video_quality) {
-            case GSR_VIDEO_QUALITY_MEDIUM:
-                av_dict_set_int(options, "qp", 34 * qp_multiply, 0);
-                break;
-            case GSR_VIDEO_QUALITY_HIGH:
-                av_dict_set_int(options, "qp", 30 * qp_multiply, 0);
-                break;
-            case GSR_VIDEO_QUALITY_VERY_HIGH:
-                av_dict_set_int(options, "qp", 25 * qp_multiply, 0);
-                break;
-            case GSR_VIDEO_QUALITY_ULTRA:
-                av_dict_set_int(options, "qp", 22 * qp_multiply, 0);
-                break;
-        }
-    } else {
-        switch(video_quality) {
-            case GSR_VIDEO_QUALITY_MEDIUM:
-                av_dict_set_int(options, "qp", 35 * qp_multiply, 0);
-                break;
-            case GSR_VIDEO_QUALITY_HIGH:
-                av_dict_set_int(options, "qp", 30 * qp_multiply, 0);
-                break;
-            case GSR_VIDEO_QUALITY_VERY_HIGH:
-                av_dict_set_int(options, "qp", 25 * qp_multiply, 0);
-                break;
-            case GSR_VIDEO_QUALITY_ULTRA:
-                av_dict_set_int(options, "qp", 22 * qp_multiply, 0);
-                break;
-        }
+    switch(video_quality) {
+        case GSR_VIDEO_QUALITY_MEDIUM:
+            av_dict_set_int(options, "qp", 35 * qp_multiply, 0);
+            break;
+        case GSR_VIDEO_QUALITY_HIGH:
+            av_dict_set_int(options, "qp", 30 * qp_multiply, 0);
+            break;
+        case GSR_VIDEO_QUALITY_VERY_HIGH:
+            av_dict_set_int(options, "qp", 25 * qp_multiply, 0);
+            break;
+        case GSR_VIDEO_QUALITY_ULTRA:
+            av_dict_set_int(options, "qp", 22 * qp_multiply, 0);
+            break;
     }
 }
|
|
||||||
@@ -723,118 +691,19 @@ static void video_set_rc(gsr_video_codec video_codec, gsr_gpu_vendor vendor, gsr
 static void video_hardware_set_qp(AVCodecContext *codec_context, gsr_video_quality video_quality, gsr_gpu_vendor vendor, bool hdr, AVDictionary **options) {
     // 8 bit / 10 bit = 80%
     const float qp_multiply = hdr ? 8.0f/10.0f : 1.0f;
-    if(vendor == GSR_GPU_VENDOR_NVIDIA) {
-        // TODO: Test if these should be in the same range as vaapi
-        if(codec_context->codec_id == AV_CODEC_ID_AV1) {
-            switch(video_quality) {
-                case GSR_VIDEO_QUALITY_MEDIUM:
-                    av_dict_set_int(options, "qp", 35 * qp_multiply, 0);
-                    break;
-                case GSR_VIDEO_QUALITY_HIGH:
-                    av_dict_set_int(options, "qp", 30 * qp_multiply, 0);
-                    break;
-                case GSR_VIDEO_QUALITY_VERY_HIGH:
-                    av_dict_set_int(options, "qp", 25 * qp_multiply, 0);
-                    break;
-                case GSR_VIDEO_QUALITY_ULTRA:
-                    av_dict_set_int(options, "qp", 22 * qp_multiply, 0);
-                    break;
-            }
-        } else if(codec_context->codec_id == AV_CODEC_ID_H264) {
-            switch(video_quality) {
-                case GSR_VIDEO_QUALITY_MEDIUM:
-                    av_dict_set_int(options, "qp", 35 * qp_multiply, 0);
-                    break;
-                case GSR_VIDEO_QUALITY_HIGH:
-                    av_dict_set_int(options, "qp", 30 * qp_multiply, 0);
-                    break;
-                case GSR_VIDEO_QUALITY_VERY_HIGH:
-                    av_dict_set_int(options, "qp", 25 * qp_multiply, 0);
-                    break;
-                case GSR_VIDEO_QUALITY_ULTRA:
-                    av_dict_set_int(options, "qp", 22 * qp_multiply, 0);
-                    break;
-            }
-        } else if(codec_context->codec_id == AV_CODEC_ID_HEVC) {
-            switch(video_quality) {
-                case GSR_VIDEO_QUALITY_MEDIUM:
-                    av_dict_set_int(options, "qp", 35 * qp_multiply, 0);
-                    break;
-                case GSR_VIDEO_QUALITY_HIGH:
-                    av_dict_set_int(options, "qp", 30 * qp_multiply, 0);
-                    break;
-                case GSR_VIDEO_QUALITY_VERY_HIGH:
-                    av_dict_set_int(options, "qp", 25 * qp_multiply, 0);
-                    break;
-                case GSR_VIDEO_QUALITY_ULTRA:
-                    av_dict_set_int(options, "qp", 22 * qp_multiply, 0);
-                    break;
-            }
-        } else if(codec_context->codec_id == AV_CODEC_ID_VP8 || codec_context->codec_id == AV_CODEC_ID_VP9) {
-            switch(video_quality) {
-                case GSR_VIDEO_QUALITY_MEDIUM:
-                    av_dict_set_int(options, "qp", 35 * qp_multiply, 0);
-                    break;
-                case GSR_VIDEO_QUALITY_HIGH:
-                    av_dict_set_int(options, "qp", 30 * qp_multiply, 0);
-                    break;
-                case GSR_VIDEO_QUALITY_VERY_HIGH:
-                    av_dict_set_int(options, "qp", 25 * qp_multiply, 0);
-                    break;
-                case GSR_VIDEO_QUALITY_ULTRA:
-                    av_dict_set_int(options, "qp", 22 * qp_multiply, 0);
-                    break;
-            }
-        }
-    } else {
-        if(codec_context->codec_id == AV_CODEC_ID_AV1) {
-            // Using global_quality option
-        } else if(codec_context->codec_id == AV_CODEC_ID_H264) {
-            switch(video_quality) {
-                case GSR_VIDEO_QUALITY_MEDIUM:
-                    av_dict_set_int(options, "qp", 35 * qp_multiply, 0);
-                    break;
-                case GSR_VIDEO_QUALITY_HIGH:
-                    av_dict_set_int(options, "qp", 30 * qp_multiply, 0);
-                    break;
-                case GSR_VIDEO_QUALITY_VERY_HIGH:
-                    av_dict_set_int(options, "qp", 25 * qp_multiply, 0);
-                    break;
-                case GSR_VIDEO_QUALITY_ULTRA:
-                    av_dict_set_int(options, "qp", 22 * qp_multiply, 0);
-                    break;
-            }
-        } else if(codec_context->codec_id == AV_CODEC_ID_HEVC) {
-            switch(video_quality) {
-                case GSR_VIDEO_QUALITY_MEDIUM:
-                    av_dict_set_int(options, "qp", 35 * qp_multiply, 0);
-                    break;
-                case GSR_VIDEO_QUALITY_HIGH:
-                    av_dict_set_int(options, "qp", 30 * qp_multiply, 0);
-                    break;
-                case GSR_VIDEO_QUALITY_VERY_HIGH:
-                    av_dict_set_int(options, "qp", 25 * qp_multiply, 0);
-                    break;
-                case GSR_VIDEO_QUALITY_ULTRA:
-                    av_dict_set_int(options, "qp", 22 * qp_multiply, 0);
-                    break;
-            }
-        } else if(codec_context->codec_id == AV_CODEC_ID_VP8 || codec_context->codec_id == AV_CODEC_ID_VP9) {
-            switch(video_quality) {
-                case GSR_VIDEO_QUALITY_MEDIUM:
-                    av_dict_set_int(options, "qp", 35 * qp_multiply, 0);
-                    break;
-                case GSR_VIDEO_QUALITY_HIGH:
-                    av_dict_set_int(options, "qp", 30 * qp_multiply, 0);
-                    break;
-                case GSR_VIDEO_QUALITY_VERY_HIGH:
-                    av_dict_set_int(options, "qp", 25 * qp_multiply, 0);
-                    break;
-                case GSR_VIDEO_QUALITY_ULTRA:
-                    av_dict_set_int(options, "qp", 22 * qp_multiply, 0);
-                    break;
-            }
-        }
-    }
+    switch(video_quality) {
+        case GSR_VIDEO_QUALITY_MEDIUM:
+            av_dict_set_int(options, "qp", 35 * qp_multiply, 0);
+            break;
+        case GSR_VIDEO_QUALITY_HIGH:
+            av_dict_set_int(options, "qp", 30 * qp_multiply, 0);
+            break;
+        case GSR_VIDEO_QUALITY_VERY_HIGH:
+            av_dict_set_int(options, "qp", 25 * qp_multiply, 0);
+            break;
+        case GSR_VIDEO_QUALITY_ULTRA:
+            av_dict_set_int(options, "qp", 22 * qp_multiply, 0);
+            break;
+    }
 }
@@ -854,9 +723,19 @@ static void open_video_hardware(AVCodecContext *codec_context, bool low_power, c
 
     if(video_codec_is_vulkan(arg_parser.video_codec)) {
         av_dict_set_int(&options, "async_depth", 3, 0);
-        av_dict_set(&options, "tune", "hq", 0);
-        av_dict_set(&options, "usage", "record", 0); // TODO: Set to stream when streaming
-        av_dict_set(&options, "content", "rendered", 0);
+        av_dict_set(&options, "tune", "ll", 0); // Low latency
+        av_dict_set(&options, "usage", arg_parser.is_livestream ? "stream" : "record", 0);
+        av_dict_set(&options, "content", "rendered", 0); // Game or 3D content
+
+        if(codec_context->codec_id == AV_CODEC_ID_H264) {
+            // Removed because it causes stutter in games for some people
+            //av_dict_set_int(&options, "quality", 5, 0); // quality preset
+        } else if(codec_context->codec_id == AV_CODEC_ID_AV1) {
+            av_dict_set(&options, "tier", "main", 0);
+        } else if(codec_context->codec_id == AV_CODEC_ID_HEVC) {
+            if(hdr)
+                av_dict_set(&options, "sei", "hdr", 0);
+        }
     } else if(egl.gpu_info.vendor == GSR_GPU_VENDOR_NVIDIA) {
         // TODO: These dont seem to be necessary
         // av_dict_set_int(&options, "zerolatency", 1, 0);
@@ -866,7 +745,7 @@ static void open_video_hardware(AVCodecContext *codec_context, bool low_power, c
         // av_dict_set(&options, "preset", "llhq", 0);
         // av_dict_set(&options, "tune", "ll", 0);
         // }
-        av_dict_set(&options, "tune", "hq", 0);
+        av_dict_set(&options, "tune", "ll", 0);
 
         switch(arg_parser.tune) {
             case GSR_TUNE_PERFORMANCE:
@@ -1674,7 +1553,7 @@ static bool get_supported_video_codecs(gsr_egl *egl, gsr_video_codec video_codec
     }
 
     if(video_codec_is_vulkan(video_codec))
-        return gsr_get_supported_video_codecs_vulkan(video_codecs, egl->card_path, cleanup);
+        return gsr_get_supported_video_codecs_vulkan(video_codecs, egl->card_path, &egl->vulkan_device_index, cleanup);
 
     switch(egl->gpu_info.vendor) {
         case GSR_GPU_VENDOR_AMD:
@@ -1779,7 +1658,13 @@ static const AVCodec* get_ffmpeg_video_codec(gsr_video_codec video_codec, gsr_gp
         case GSR_VIDEO_CODEC_H264_VULKAN:
             return avcodec_find_encoder_by_name("h264_vulkan");
         case GSR_VIDEO_CODEC_HEVC_VULKAN:
+        case GSR_VIDEO_CODEC_HEVC_HDR_VULKAN:
+        case GSR_VIDEO_CODEC_HEVC_10BIT_VULKAN:
             return avcodec_find_encoder_by_name("hevc_vulkan");
+        case GSR_VIDEO_CODEC_AV1_VULKAN:
+        case GSR_VIDEO_CODEC_AV1_HDR_VULKAN:
+        case GSR_VIDEO_CODEC_AV1_10BIT_VULKAN:
+            return avcodec_find_encoder_by_name("av1_vulkan");
     }
     return nullptr;
 }
@@ -1819,6 +1704,12 @@ static void set_supported_video_codecs_ffmpeg(gsr_supported_video_codecs *suppor
         supported_video_codecs_vulkan->hevc_hdr.supported = false;
         supported_video_codecs_vulkan->hevc_10bit.supported = false;
     }
 
+    if(!get_ffmpeg_video_codec(GSR_VIDEO_CODEC_AV1_VULKAN, vendor)) {
+        supported_video_codecs_vulkan->av1.supported = false;
+        supported_video_codecs_vulkan->av1_hdr.supported = false;
+        supported_video_codecs_vulkan->av1_10bit.supported = false;
+    }
     }
 }
@@ -1852,10 +1743,20 @@ static void list_supported_video_codecs(gsr_egl *egl, bool wayland) {
         puts("vp8");
     if(supported_video_codecs.vp9.supported)
         puts("vp9");
-    //if(supported_video_codecs_vulkan.h264.supported)
-    //    puts("h264_vulkan");
-    //if(supported_video_codecs_vulkan.hevc.supported)
-    //    puts("hevc_vulkan"); // TODO: hdr, 10 bit
+    if(supported_video_codecs_vulkan.h264.supported)
+        puts("h264_vulkan");
+    if(supported_video_codecs_vulkan.hevc.supported)
+        puts("hevc_vulkan");
+    if(supported_video_codecs_vulkan.hevc_hdr.supported && wayland)
+        puts("hevc_hdr_vulkan");
+    if(supported_video_codecs_vulkan.hevc_10bit.supported)
+        puts("hevc_10bit_vulkan");
+    if(supported_video_codecs_vulkan.av1.supported)
+        puts("av1_vulkan");
+    if(supported_video_codecs_vulkan.av1_hdr.supported && wayland)
+        puts("av1_hdr_vulkan");
+    if(supported_video_codecs_vulkan.av1_10bit.supported)
+        puts("av1_10bit_vulkan");
 }
 
 static bool monitor_capture_use_drm(const gsr_window *window, gsr_gpu_vendor vendor) {
@@ -3086,56 +2987,68 @@ static gsr_audio_codec select_audio_codec_with_fallback(gsr_audio_codec audio_co
 
 static bool video_codec_only_supports_low_power_mode(const gsr_supported_video_codecs &supported_video_codecs, gsr_video_codec video_codec) {
     switch(video_codec) {
         case GSR_VIDEO_CODEC_H264: return supported_video_codecs.h264.low_power;
         case GSR_VIDEO_CODEC_HEVC: return supported_video_codecs.hevc.low_power;
         case GSR_VIDEO_CODEC_HEVC_HDR: return supported_video_codecs.hevc_hdr.low_power;
         case GSR_VIDEO_CODEC_HEVC_10BIT: return supported_video_codecs.hevc_10bit.low_power;
         case GSR_VIDEO_CODEC_AV1: return supported_video_codecs.av1.low_power;
         case GSR_VIDEO_CODEC_AV1_HDR: return supported_video_codecs.av1_hdr.low_power;
         case GSR_VIDEO_CODEC_AV1_10BIT: return supported_video_codecs.av1_10bit.low_power;
         case GSR_VIDEO_CODEC_VP8: return supported_video_codecs.vp8.low_power;
         case GSR_VIDEO_CODEC_VP9: return supported_video_codecs.vp9.low_power;
         case GSR_VIDEO_CODEC_H264_VULKAN: return supported_video_codecs.h264.low_power;
-        case GSR_VIDEO_CODEC_HEVC_VULKAN: return supported_video_codecs.hevc.low_power; // TODO: hdr, 10 bit
+        case GSR_VIDEO_CODEC_HEVC_VULKAN: return supported_video_codecs.hevc.low_power;
+        case GSR_VIDEO_CODEC_HEVC_HDR_VULKAN: return supported_video_codecs.hevc_hdr.low_power;
+        case GSR_VIDEO_CODEC_HEVC_10BIT_VULKAN: return supported_video_codecs.hevc_10bit.low_power;
+        case GSR_VIDEO_CODEC_AV1_VULKAN: return supported_video_codecs.av1.low_power;
+        case GSR_VIDEO_CODEC_AV1_HDR_VULKAN: return supported_video_codecs.av1_hdr.low_power;
+        case GSR_VIDEO_CODEC_AV1_10BIT_VULKAN: return supported_video_codecs.av1_10bit.low_power;
     }
     return false;
 }
 
 static const AVCodec* get_av_codec_if_supported(gsr_video_codec video_codec, gsr_egl *egl, bool use_software_video_encoder, const gsr_supported_video_codecs *supported_video_codecs) {
     switch(video_codec) {
-        case GSR_VIDEO_CODEC_H264: {
+        case GSR_VIDEO_CODEC_H264:
+        case GSR_VIDEO_CODEC_H264_VULKAN: {
             if(use_software_video_encoder)
                 return avcodec_find_encoder_by_name("libx264");
             else if(supported_video_codecs->h264.supported)
                 return get_ffmpeg_video_codec(video_codec, egl->gpu_info.vendor);
             break;
         }
-        case GSR_VIDEO_CODEC_HEVC: {
+        case GSR_VIDEO_CODEC_HEVC:
+        case GSR_VIDEO_CODEC_HEVC_VULKAN: {
             if(supported_video_codecs->hevc.supported)
                 return get_ffmpeg_video_codec(video_codec, egl->gpu_info.vendor);
             break;
         }
-        case GSR_VIDEO_CODEC_HEVC_HDR: {
+        case GSR_VIDEO_CODEC_HEVC_HDR:
+        case GSR_VIDEO_CODEC_HEVC_HDR_VULKAN: {
             if(supported_video_codecs->hevc_hdr.supported)
                 return get_ffmpeg_video_codec(video_codec, egl->gpu_info.vendor);
             break;
         }
-        case GSR_VIDEO_CODEC_HEVC_10BIT: {
+        case GSR_VIDEO_CODEC_HEVC_10BIT:
+        case GSR_VIDEO_CODEC_HEVC_10BIT_VULKAN: {
             if(supported_video_codecs->hevc_10bit.supported)
                 return get_ffmpeg_video_codec(video_codec, egl->gpu_info.vendor);
             break;
         }
-        case GSR_VIDEO_CODEC_AV1: {
+        case GSR_VIDEO_CODEC_AV1:
+        case GSR_VIDEO_CODEC_AV1_VULKAN: {
             if(supported_video_codecs->av1.supported)
                 return get_ffmpeg_video_codec(video_codec, egl->gpu_info.vendor);
             break;
         }
-        case GSR_VIDEO_CODEC_AV1_HDR: {
+        case GSR_VIDEO_CODEC_AV1_HDR:
+        case GSR_VIDEO_CODEC_AV1_HDR_VULKAN: {
             if(supported_video_codecs->av1_hdr.supported)
                 return get_ffmpeg_video_codec(video_codec, egl->gpu_info.vendor);
             break;
         }
-        case GSR_VIDEO_CODEC_AV1_10BIT: {
+        case GSR_VIDEO_CODEC_AV1_10BIT:
+        case GSR_VIDEO_CODEC_AV1_10BIT_VULKAN: {
             if(supported_video_codecs->av1_10bit.supported)
                 return get_ffmpeg_video_codec(video_codec, egl->gpu_info.vendor);
             break;
@@ -3150,56 +3063,52 @@ static const AVCodec* get_av_codec_if_supported(gsr_video_codec video_codec, gsr
                 return get_ffmpeg_video_codec(video_codec, egl->gpu_info.vendor);
             break;
         }
-        case GSR_VIDEO_CODEC_H264_VULKAN: {
-            if(supported_video_codecs->h264.supported)
-                return get_ffmpeg_video_codec(video_codec, egl->gpu_info.vendor);
-            break;
-        }
-        case GSR_VIDEO_CODEC_HEVC_VULKAN: {
-            // TODO: hdr, 10 bit
-            if(supported_video_codecs->hevc.supported)
-                return get_ffmpeg_video_codec(video_codec, egl->gpu_info.vendor);
-            break;
-        }
     }
     return nullptr;
 }
 
 static vec2i codec_get_max_resolution(gsr_video_codec video_codec, bool use_software_video_encoder, const gsr_supported_video_codecs *supported_video_codecs) {
     switch(video_codec) {
-        case GSR_VIDEO_CODEC_H264: {
+        case GSR_VIDEO_CODEC_H264:
+        case GSR_VIDEO_CODEC_H264_VULKAN: {
             if(use_software_video_encoder)
                 return {4096, 2304};
             else if(supported_video_codecs->h264.supported)
                 return supported_video_codecs->h264.max_resolution;
             break;
         }
-        case GSR_VIDEO_CODEC_HEVC: {
+        case GSR_VIDEO_CODEC_HEVC:
+        case GSR_VIDEO_CODEC_HEVC_VULKAN: {
             if(supported_video_codecs->hevc.supported)
                 return supported_video_codecs->hevc.max_resolution;
             break;
         }
-        case GSR_VIDEO_CODEC_HEVC_HDR: {
+        case GSR_VIDEO_CODEC_HEVC_HDR:
+        case GSR_VIDEO_CODEC_HEVC_HDR_VULKAN: {
             if(supported_video_codecs->hevc_hdr.supported)
                 return supported_video_codecs->hevc_hdr.max_resolution;
             break;
         }
-        case GSR_VIDEO_CODEC_HEVC_10BIT: {
+        case GSR_VIDEO_CODEC_HEVC_10BIT:
+        case GSR_VIDEO_CODEC_HEVC_10BIT_VULKAN: {
             if(supported_video_codecs->hevc_10bit.supported)
                 return supported_video_codecs->hevc_10bit.max_resolution;
             break;
         }
-        case GSR_VIDEO_CODEC_AV1: {
+        case GSR_VIDEO_CODEC_AV1:
+        case GSR_VIDEO_CODEC_AV1_VULKAN: {
             if(supported_video_codecs->av1.supported)
                 return supported_video_codecs->av1.max_resolution;
             break;
         }
-        case GSR_VIDEO_CODEC_AV1_HDR: {
+        case GSR_VIDEO_CODEC_AV1_HDR:
+        case GSR_VIDEO_CODEC_AV1_HDR_VULKAN: {
             if(supported_video_codecs->av1_hdr.supported)
                 return supported_video_codecs->av1_hdr.max_resolution;
             break;
         }
-        case GSR_VIDEO_CODEC_AV1_10BIT: {
+        case GSR_VIDEO_CODEC_AV1_10BIT:
+        case GSR_VIDEO_CODEC_AV1_10BIT_VULKAN: {
             if(supported_video_codecs->av1_10bit.supported)
                 return supported_video_codecs->av1_10bit.max_resolution;
             break;
@@ -3214,17 +3123,6 @@ static vec2i codec_get_max_resolution(gsr_video_codec video_codec, bool use_soft
             return supported_video_codecs->vp9.max_resolution;
             break;
         }
-        case GSR_VIDEO_CODEC_H264_VULKAN: {
-            if(supported_video_codecs->h264.supported)
-                return supported_video_codecs->h264.max_resolution;
-            break;
-        }
-        case GSR_VIDEO_CODEC_HEVC_VULKAN: {
-            // TODO: hdr, 10 bit
-            if(supported_video_codecs->hevc.supported)
-                return supported_video_codecs->hevc.max_resolution;
-            break;
-        }
     }
     return {0, 0};
 }
@@ -3303,7 +3201,9 @@ static const AVCodec* pick_video_codec(gsr_egl *egl, args_parser *args_parser, b
         }
         return pick_video_codec(egl, args_parser, true, low_power, supported_video_codecs);
     }
-    case GSR_VIDEO_CODEC_HEVC_VULKAN: {
+    case GSR_VIDEO_CODEC_HEVC_VULKAN:
+    case GSR_VIDEO_CODEC_HEVC_HDR_VULKAN:
+    case GSR_VIDEO_CODEC_HEVC_10BIT_VULKAN: {
         fprintf(stderr, "gsr warning: selected video codec hevc_vulkan is not supported by your hardware, trying hevc instead\n");
         args_parser->video_codec = GSR_VIDEO_CODEC_HEVC;
         // Need to do a query again because this time it's without vulkan
@@ -3314,6 +3214,19 @@ static const AVCodec* pick_video_codec(gsr_egl *egl, args_parser *args_parser, b
        }
         return pick_video_codec(egl, args_parser, true, low_power, supported_video_codecs);
     }
+    case GSR_VIDEO_CODEC_AV1_VULKAN:
+    case GSR_VIDEO_CODEC_AV1_HDR_VULKAN:
+    case GSR_VIDEO_CODEC_AV1_10BIT_VULKAN: {
+        fprintf(stderr, "gsr warning: selected video codec av1_vulkan is not supported by your hardware, trying av1 instead\n");
+        args_parser->video_codec = GSR_VIDEO_CODEC_AV1;
+        // Need to do a query again because this time it's without vulkan
+        if(!get_supported_video_codecs(egl, args_parser->video_codec, false, true, supported_video_codecs)) {
+            fprintf(stderr, "gsr error: failed to query for supported video codecs\n");
+            print_codec_error(args_parser->video_codec);
+            _exit(11);
+        }
+        return pick_video_codec(egl, args_parser, true, low_power, supported_video_codecs);
+    }
     }
 
     video_codec_f = get_av_codec_if_supported(args_parser->video_codec, egl, args_parser->video_encoder == GSR_VIDEO_ENCODER_HW_CPU, supported_video_codecs);
@@ -3350,10 +3263,15 @@ static gsr_video_codec select_appropriate_video_codec_automatically(vec2i video_
 }
 
 static const AVCodec* select_video_codec_with_fallback(vec2i video_size, args_parser *args_parser, const char *file_extension, gsr_egl *egl, bool *low_power) {
-    gsr_supported_video_codecs supported_video_codecs;
-    get_supported_video_codecs(egl, args_parser->video_codec, args_parser->video_encoder == GSR_VIDEO_ENCODER_HW_CPU, true, &supported_video_codecs);
-    // TODO: Use gsr_supported_video_codecs *supported_video_codecs_vulkan here to properly query vulkan video support
-    set_supported_video_codecs_ffmpeg(&supported_video_codecs, nullptr, egl->gpu_info.vendor);
+    gsr_supported_video_codecs supported_video_codecs_non_vulkan;
+    get_supported_video_codecs(egl, args_parser->video_codec, args_parser->video_encoder == GSR_VIDEO_ENCODER_HW_CPU, true, &supported_video_codecs_non_vulkan);
+    gsr_supported_video_codecs supported_video_codecs_vulkan = supported_video_codecs_non_vulkan;
+    set_supported_video_codecs_ffmpeg(&supported_video_codecs_non_vulkan, &supported_video_codecs_vulkan, egl->gpu_info.vendor);
+
+    gsr_supported_video_codecs *supported_video_codecs = video_codec_is_vulkan(args_parser->video_codec)
+        ? &supported_video_codecs_vulkan
+        : &supported_video_codecs_non_vulkan;
+
     const bool video_codec_auto = args_parser->video_codec == (gsr_video_codec)GSR_VIDEO_CODEC_AUTO;
     if(video_codec_auto) {
@@ -3364,7 +3282,7 @@ static const AVCodec* select_video_codec_with_fallback(vec2i video_size, args_pa
             fprintf(stderr, "gsr info: using h264 encoder because a codec was not specified\n");
             args_parser->video_codec = GSR_VIDEO_CODEC_H264;
         } else if(args_parser->video_encoder != GSR_VIDEO_ENCODER_HW_CPU) {
-            args_parser->video_codec = select_appropriate_video_codec_automatically(video_size, &supported_video_codecs);
+            args_parser->video_codec = select_appropriate_video_codec_automatically(video_size, &supported_video_codecs_non_vulkan);
             if(args_parser->video_codec == (gsr_video_codec)-1) {
                 if(args_parser->fallback_cpu_encoding) {
                     fprintf(stderr, "gsr warning: gpu encoding is not available on your system or your gpu doesn't support recording at the resolution you are trying to record, trying cpu encoding instead because -fallback-cpu-encoding is enabled. Install the proper vaapi drivers on your system (if supported) if you experience performance issues\n");
@@ -3390,9 +3308,9 @@ static const AVCodec* select_video_codec_with_fallback(vec2i video_size, args_pa
         }
     }
 
-    const AVCodec *codec = pick_video_codec(egl, args_parser, true, low_power, &supported_video_codecs);
+    const AVCodec *codec = pick_video_codec(egl, args_parser, true, low_power, supported_video_codecs);
 
-    const vec2i codec_max_resolution = codec_get_max_resolution(args_parser->video_codec, args_parser->video_encoder == GSR_VIDEO_ENCODER_HW_CPU, &supported_video_codecs);
+    const vec2i codec_max_resolution = codec_get_max_resolution(args_parser->video_codec, args_parser->video_encoder == GSR_VIDEO_ENCODER_HW_CPU, supported_video_codecs);
     if(!codec_supports_resolution(codec_max_resolution, video_size)) {
         const char *video_codec_name = video_codec_to_string(args_parser->video_codec);
         fprintf(stderr, "gsr error: The max resolution for video codec %s is %dx%d while you are trying to capture at resolution %dx%d. Change capture resolution or video codec and try again\n",
@@ -3714,11 +3632,16 @@ int main(int argc, char **argv) {
         int driver_major_version = 0;
         int driver_minor_version = 0;
         if(get_nvidia_driver_version(&driver_major_version, &driver_minor_version) && (driver_major_version > 580 || (driver_major_version == 580 && driver_minor_version >= 105))) {
-            fprintf(stderr, "gsr info: overclocking was set by has been forcefully disabled since your gpu supports CUDA_DISABLE_PERF_BOOST to workaround driver issue (overclocking is not needed)\n");
+            fprintf(stderr, "gsr info: overclocking was set but has been forcefully disabled since your gpu supports CUDA_DISABLE_PERF_BOOST to workaround driver issue (overclocking is not needed)\n");
             arg_parser.overclock = false;
         }
     }
 
+    if(arg_parser.overclock && video_codec_is_vulkan(arg_parser.video_codec)) {
+        fprintf(stderr, "gsr info: overclocking was set but has been forcefully disabled since you're using vulkan video encoder which doesn't suffer from cuda p2 power level issues\n");
+        arg_parser.overclock = false;
+    }
+
     //av_log_set_level(AV_LOG_TRACE);
 
     const Arg *audio_input_arg = args_parser_get_arg(&arg_parser, "-a");