Compare commits


31 Commits

Author SHA1 Message Date
dec05eba 4363f8b2b0 5.10.2 2025-12-08 02:53:43 +01:00
dec05eba 5906a0c06f nvfbc: fix scaled monitor capture not working correctly 2025-12-08 02:53:29 +01:00
dec05eba 2c53638bb0 Fix high cpu usage when not receiving audio 2025-12-08 02:22:23 +01:00
dec05eba 80c5566d40 5.10.0 2025-12-07 18:13:57 +01:00
dec05eba 3ac17b99a0 App audio capture: remove gsr-app-sink 2025-12-07 18:10:18 +01:00
    Connect application/device audio directly to gsr recording node.
    This fixes an issue for some users where gsr-app-sink got selected by
    default as an output device.

    Dont wait until audio node first receives audio before recording audio
    from the device. This might fix audio/video desync issue when recording
    from microphone for example.
dec05eba 2064d109ee Pipewire audio: set media role to production (hoping for lower latency) 2025-12-07 13:44:11 +01:00
dec05eba cedf3ae7d7 Use the audio argument as the name of the audio track 2025-12-05 00:18:26 +01:00
dec05eba af941f602b 5.9.4 2025-12-04 20:15:44 +01:00
dec05eba c1614e4f30 nvfbc: mitigate x11 display leak on monitor off 2025-12-04 20:06:47 +01:00
    When the monitor is turned off gsr will attempt to recreate the nvfbc
    session once a second. If that fails (because the monitor is still
    turned off) then nvfbc will leak an x11 display. This seems to be a bug
    in the nvidia driver.

    Mitigate this by only attempting to recreate the nvfbc session if the
    capture target monitor can be found (predicting if nvfbc recreate will
    fail).

    Thanks to Lim Ding Wen for finding the issue and suggesting the
    mitigation.
dec05eba f00dec683e Mention flickering cursor 2025-12-01 02:17:46 +01:00
dec05eba 11930c355f Update README 2025-11-26 18:27:49 +01:00
dec05eba 716dc0b736 5.9.3 2025-11-25 19:02:58 +01:00
dec05eba 059e3dbbc0 pipewire video: check if has modifier 2025-11-24 21:26:02 +01:00
dec05eba 990dfc7589 pipewire video: re-negotiate modifiers multiple times until we get a good one 2025-11-24 21:15:35 +01:00
dec05eba 2d551e7b1f Proper fallback for vulkan video 2025-11-23 14:40:54 +01:00
dec05eba 72c548c19e Fix -fallback-cpu-encoding not working properly 2025-11-23 01:36:00 +01:00
dec05eba 7123081e53 README DnD 2025-11-19 23:26:45 +01:00
dec05eba eba5f3f43d Allow hevc/av1 flv for recent ffmpeg 2025-11-19 20:26:37 +01:00
dec05eba 76dd8004e9 5.9.1 2025-11-19 02:30:41 +01:00
dec05eba 02c7a0bcce Fix region capture not always working on wayland if monitor is rotated (incorrect region detected) 2025-11-19 02:29:56 +01:00
dec05eba 82177cd742 m 2025-11-18 20:42:27 +01:00
dec05eba 84e3b911d9 Update README with info about amd performance and move sections 2025-11-18 20:39:14 +01:00
dec05eba f9ab3ebd65 5.9.0 2025-11-18 11:41:12 +01:00
dec05eba ab5988a2bb Dont scale image frame to padding in video 2025-11-18 02:52:11 +01:00
dec05eba 95c6fc84ea x11: fix monitor region incorrect when capturing a rotated monitor connected to an external gpu 2025-11-18 01:06:37 +01:00
dec05eba a10b882e82 Fix shit 2025-11-18 00:28:13 +01:00
dec05eba 92f4bd5f95 kms: fix incorrect capture region on intel when playing a fullscreen game with a lower resolution (on some wayland compositors) 2025-11-17 23:54:09 +01:00
dec05eba cc43ca0336 Scale video/image to output size instead of adding black bars or no scaling 2025-11-17 23:19:26 +01:00
dec05eba a3e1b2a896 comment 2025-11-16 20:47:07 +01:00
dec05eba 80f0e483a4 screenshot: improve jpeg very high quality to 91 (enables yuv444 instead of yuv420) 2025-11-16 18:58:57 +01:00
dec05eba 739fd9cd72 Replay: attempt to fix audio desync when using multiple audio tracks 2025-11-16 17:42:30 +01:00
23 changed files with 346 additions and 367 deletions


@@ -102,11 +102,8 @@ You can see a list of capture options to record if you run `gpu-screen-recorder
```
in this case you could record a window or a monitor with the name `DP-1`.\
To list the available audio devices you can run `gpu-screen-recorder --list-audio-devices`; the name to use is on the left side of the `|`.\
To list available audio application names that you can use you can run `gpu-screen-recorder --list-application-audio`.
## Streaming
Streaming works the same way as recording, but the `-o` argument should be the path of the live streaming service you want to use (including your live streaming key). Take a look at `scripts/twitch-stream.sh` to see an example of how to stream to Twitch.\
GPU Screen Recorder uses FFmpeg, so it supports all protocols that FFmpeg supports.\
If you want to reduce latency you can use the `-keyint` option, for example `-keyint 0.5`. A lower value means lower latency at the cost of increased bitrate/decreased quality.
To list available audio application names that you can use you can run `gpu-screen-recorder --list-application-audio`.\
You can run `gpu-screen-recorder --info` to list more information about the system, such as the device that is used for capture and video encoding and supported codecs. These commands can be parsed by scripts/programs.
## Replay mode
Run `gpu-screen-recorder` with the `-c mp4` and `-r` option, for example: `gpu-screen-recorder -w screen -f 60 -r 30 -c mp4 -o ~/Videos`. Note that in this case, `-o` should point to a directory.\
If `-df yes` is set, replays are saved in folders based on the date.
@@ -117,9 +114,13 @@ This can be used for example to show a notification when a replay has been saved
The replay buffer is stored in ram (as encoded video) by default, so don't use too large a replay time and/or video quality unless you have enough ram to store it.\
You can use the `-replay-storage disk` option to store the replay buffer on disk instead of ram (in the same location as the output video).\
By default videos are recorded with constant quality, but with replay mode you might want to record in constant bitrate mode instead for consistent ram/disk usage in high motion scenes. You can do that by using the `-bm cbr` option (along with `-q` option, for example `-bm cbr -q 20000`).
## Streaming
Streaming works the same way as recording, but the `-o` argument should be the path of the live streaming service you want to use (including your live streaming key). Take a look at `scripts/twitch-stream.sh` to see an example of how to stream to Twitch.\
GPU Screen Recorder uses FFmpeg, so it supports all protocols that FFmpeg supports.\
If you want to reduce latency you can use the `-keyint` option, for example `-keyint 0.5`. A lower value means lower latency at the cost of increased bitrate/decreased quality.
## Recording while using replay/streaming
You can record a regular video while using replay/streaming by launching GPU Screen Recorder with the `-ro` option to specify a directory where to save the recording.\
To start/stop (and save) recording use the SIGRTMIN signal, for example `pkill -SIGRTMIN -f gpu-screen-recorder`. The name of the video will be displayed in stdout when saving the video.\
You can record a regular video while using replay/streaming by launching GPU Screen Recorder with the `-ro` option to specify a directory where to save the recording (for example: `gpu-screen-recorder -w screen -c mp4 -r 60 -o "$HOME/Videos/replays" -ro "$HOME/Videos/recordings"`).\
To start/stop (and save) recording use the SIGRTMIN signal, for example `pkill -SIGRTMIN -f gpu-screen-recorder`. The path to the video will be displayed in stdout when saving the video.\
This way of recording while using replay/streaming is more efficient than running GPU Screen Recorder multiple times, since the screen is captured and the video encoded only once.
## Controlling GPU Screen Recorder remotely
To save a video in replay mode, you need to send the SIGUSR1 signal to GPU Screen Recorder. You can do this by running `pkill -SIGUSR1 -f gpu-screen-recorder`.\
@@ -150,12 +151,28 @@ If you record at your monitors refresh rate and enabled vsync in a game then the
This is an issue in some games.
If you experience this issue then you might want to either disable vsync in the game or use the `-fm content` option to sync capture to the content on the screen. For example: `gpu-screen-recorder -w screen -fm content -o video.mp4`.\
Note that this option is currently only available on X11, or with desktop portal capture on Wayland (`-w portal`).
# Performance
On a system with an i5 4690k CPU and a GTX 1080 GPU:\
When recording Legend of Zelda Breath of the Wild at 4k, fps drops from 30 to 7 when using OBS Studio + nvenc, whereas with this screen recorder the fps remains at 30.\
When recording GTA V at 4k on highest settings, fps drops from 60 to 23 when using obs-nvfbc + nvenc, whereas with this screen recorder the fps only drops to 58.\
On a system with an AMD Ryzen 9 5900X CPU and an RX 7800XT GPU I don't see any fps drop at all, even when recording at 4k 60fps with the AV1 codec and 10-bit colors.\
GPU Screen Recorder also produces much smoother videos than OBS when GPU utilization is close to 100%, see comparison here: [https://www.youtube.com/watch?v=zfj4sNVLLLg](https://www.youtube.com/watch?v=zfj4sNVLLLg) and [https://www.youtube.com/watch?v=aK67RSZw2ZQ](https://www.youtube.com/watch?v=aK67RSZw2ZQ).\
GPU Screen Recorder has much better performance than OBS Studio even with version 30.2 that does "zero-copy" recording and encoding, see: [https://www.youtube.com/watch?v=jdroRjibsDw](https://www.youtube.com/watch?v=jdroRjibsDw).\
It is recommended to save the video to an SSD because of the large file size, which a slow HDD might not be able to keep up with. Using variable framerate mode (`-fm vfr`), which is the default, is also recommended as it reduces encoding load. Ultra quality is also overkill most of the time; very high (the default) or lower quality is usually enough.\
Note that for best performance you should close other screen recorders such as OBS Studio when using GPU Screen Recorder even if they are not recording, since they can affect performance even when idle.
## Note about optimal performance on NVIDIA
The NVIDIA driver has a "feature" (read: bug) where it downclocks the memory transfer rate when a program uses cuda (or nvenc, which uses cuda), such as GPU Screen Recorder. To work around this bug, GPU Screen Recorder can overclock your GPU memory transfer rate back to its normal level.\
To enable overclocking for optimal performance use the `-oc` option when running GPU Screen Recorder. You also need the "Coolbits" NVIDIA X setting set to "12" to enable overclocking. You can add this setting automatically by running `sudo nvidia-xconfig --cool-bits=12` and then rebooting your computer.\
Note that this only works when the Xorg server is running as root, and using this option will only give you a performance boost if the game you are recording is bottlenecked by your GPU.\
Note: use at your own risk!
# Issues
## NVIDIA
Nvidia drivers have an issue where CUDA breaks if CUDA is running when suspend/hibernation happens, and it remains broken until you reload the nvidia driver. `extra/gsr-nvidia.conf` is installed by default when you install GPU Screen Recorder and should fix this issue. If it doesn't fix the issue for you then your distro may use a different path for modprobe files; in that case you have to install `extra/gsr-nvidia.conf` into that location yourself.\
You have to reboot your computer after installing GPU Screen Recorder for the first time for the fix to take effect.
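The usual fix for CUDA breaking across suspend is telling the driver to preserve video memory allocations via a modprobe option. The exact contents of `extra/gsr-nvidia.conf` may differ; a typical modprobe file for this workaround looks like:

```
options nvidia NVreg_PreserveVideoMemoryAllocations=1
```

Such a file belongs in the distro's modprobe directory (commonly `/etc/modprobe.d/`), which is why a distro with a different modprobe path needs the file installed manually.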
# TEMPORARY ISSUES
## TEMPORARY ISSUES
1) Videos are in variable framerate format. Use MPV to play such videos, otherwise you might experience stuttering if your video player is buggy. You can try saving the video into a .mkv file instead as some software may have better support for .mkv files (such as kdenlive). You can use the `-fm cfr` option to use constant framerate mode.
2) FLAC audio codec is disabled at the moment because of temporary issues.
@@ -167,19 +184,7 @@ When recording a window or when using the `-w portal` option no special user per
however when recording a monitor the program needs root permission (to access KMS).\
This is safe in GPU Screen Recorder as the part that needs root access has been moved to its own small program that only does one thing.\
For you as a user this only means that if you installed GPU Screen Recorder as a flatpak then a prompt asking for the root password will show up once when you start recording.
# Performance
On a system with a i5 4690k CPU and a GTX 1080 GPU:\
When recording Legend of Zelda Breath of the Wild at 4k, fps drops from 30 to 7 when using OBS Studio + nvenc, whereas with this screen recorder the fps remains at 30.\
When recording GTA V at 4k on highest settings, fps drops from 60 to 23 when using obs-nvfbc + nvenc, whereas with this screen recorder the fps only drops to 58.\
GPU Screen Recorder also produces much smoother videos than OBS when GPU utilization is close to 100%, see comparison here: [https://www.youtube.com/watch?v=zfj4sNVLLLg](https://www.youtube.com/watch?v=zfj4sNVLLLg) and [https://www.youtube.com/watch?v=aK67RSZw2ZQ](https://www.youtube.com/watch?v=aK67RSZw2ZQ).\
GPU Screen Recorder has much better performance than OBS Studio even with version 30.2 that does "zero-copy" recording and encoding, see: [https://www.youtube.com/watch?v=jdroRjibsDw](https://www.youtube.com/watch?v=jdroRjibsDw).\
It is recommended to save the video to an SSD because of the large file size, which a slow HDD might not be able to keep up with. Using variable framerate mode (`-fm vfr`), which is the default, is also recommended as it reduces encoding load. Ultra quality is also overkill most of the time; very high (the default) or lower quality is usually enough.\
Note that for best performance you should close other screen recorders such as OBS Studio when using GPU Screen Recorder even if they are not recording, since they can affect performance even when idle.
## Note about optimal performance on NVIDIA
The NVIDIA driver has a "feature" (read: bug) where it downclocks the memory transfer rate when a program uses cuda (or nvenc, which uses cuda), such as GPU Screen Recorder. To work around this bug, GPU Screen Recorder can overclock your GPU memory transfer rate back to its normal level.\
To enable overclocking for optimal performance use the `-oc` option when running GPU Screen Recorder. You also need the "Coolbits" NVIDIA X setting set to "12" to enable overclocking. You can add this setting automatically by running `sudo nvidia-xconfig --cool-bits=12` and then rebooting your computer.\
Note that this only works when the Xorg server is running as root, and using this option will only give you a performance boost if the game you are recording is bottlenecked by your GPU.\
Note: use at your own risk!
# VRR/G-SYNC
This should work fine on AMD/Intel X11 or Wayland. On Nvidia X11, G-SYNC only works with the `-w screen-direct` option, but because of bugs in the Nvidia driver this option is not always recommended.
For example it can cause your computer to freeze when recording certain games.
@@ -235,3 +240,12 @@ then GPU Screen Recorder will automatically use that same GPU for recording and
This is a bug in kde plasma wayland. When using desktop portal capture and the monitor is rotated and a window is made fullscreen kde plasma wayland will give incorrect rotation to GPU Screen Recorder.
This also affects other screen recording software such as obs studio.\
Capture a monitor directly instead to work around this issue until the kde plasma devs fix it, or use another wayland compositor that doesn't have this issue.
## System notifications get disabled when recording with desktop portal option
Some desktop environments such as KDE Plasma turn off notifications when you record the screen with the desktop portal option. You can disable this in the KDE Plasma settings: search for notifications and, under "Do Not Disturb mode", untick "During screen sharing".
## The recorded video lags or I get dropped frames in the video
This is likely not an issue in the recorded video itself but in the video player you use. GPU Screen Recorder doesn't record by dropping frames. Some video players don't play videos with hardware acceleration by default,
especially if you record with the HEVC/AV1 video codec. In such cases it's recommended to play the video with mpv with hardware acceleration enabled (for example: `mpv --vo=gpu --hwdec=auto video.mp4`).
Some corporate distros such as Fedora (or some Fedora based distros) also disable hardware accelerated video codecs on AMD/Intel GPUs, so you might need to install mpv (or another video player) from Flathub instead, which bypasses this restriction.
## My cursor is flickering in the recorded video
This is likely an AMD gpu driver issue. It only happens on certain generations of AMD GPUs. On Wayland you can record with the desktop portal option (`-w portal`) to work around this issue.
This issue hasn't been observed on X11 yet, but if you do observe it you can either record a window (`-w $(xdotool selectwindow)`) or change your xorg config to use software cursor instead (Add `Option "SWcursor" "true"` under modesetting "Device" section in your xorg config file).
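For the software-cursor workaround mentioned above, the xorg change is a single `Option` line in the modesetting "Device" section. A minimal sketch of such a config section (the `Identifier` value is arbitrary/hypothetical):

```
Section "Device"
    Identifier "AMD"
    Driver "modesetting"
    Option "SWcursor" "true"
EndSection
```

This would typically go in a file under `/etc/X11/xorg.conf.d/`, and makes X11 composite the cursor in software instead of using the hardware cursor plane.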

TODO

@@ -61,8 +61,6 @@ Remove follow focused option.
Exit if X11/Wayland killed (if drm plane dead or something?)
Use SRC_W and SRC_H for screen plane instead of crtc_w and crtc_h.
Test if p2 state can be worked around by using pure nvenc api and overwriting cuInit/cuCtxCreate* to not do anything. Cuda might be loaded when using nvenc but it might not be used, with certain record options? (such as h264 p5).
nvenc uses cuda when using b frames and rgb->yuv conversion, so convert the image ourselves instead.
@@ -81,8 +79,6 @@ When vulkan encode is added, mention minimum nvidia driver required. (550.54.14?
Investigate if there is a way to do gpu->gpu copy directly without touching system ram to enable video encoding on a different gpu. On nvidia this is possible with cudaMemcpyPeer, but how about from an intel/amd gpu to an nvidia gpu or the other way around or any combination of iGPU and dedicated GPU?
Maybe something with clEnqueueMigrateMemObjects? on AMD something with DirectGMA maybe?
Go back to using pure vaapi without opengl for video encoding? rotation (transpose) can be done if it's done after (rgb to yuv) color conversion.
Use lanczos resampling for better scaling quality. Lanczos resampling can also be used for YUV chroma for better color quality on small text.
Flac is disabled because the frame sizes are too large which causes big audio/video desync.
@@ -199,8 +195,6 @@ Always disable prime run/dri prime and list all monitors to record from from all
Allow flv av1 if recent ffmpeg version and streaming to youtube (and twitch?) and for custom services.
Use explicit sync in pipewire video code: https://docs.pipewire.org/page_dma_buf.html.
Support vaapi rotation. Support for it is added in mesa here: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/32919.
Replay (and recording?) fails to save properly sometimes (especially for long videos). This is noticeable with mp4 files since they get corrupt and become unplayable.
The entire video does seem to get saved (it's a large video file) and it seems to have the correct headers but it's not playable.
@@ -300,9 +294,6 @@ Disable GL_DEPTH_TEST, GL_CULL_FACE.
kde plasma portal capture for screenshot doesn't work well because the portal ui is still visible when taking a screenshot because of its animation.
It's possible for microphone audio to get desynced when recording together with desktop audio, when not recording app audio as well.
Test recording desktop audio and microphone audio together (-a "default_output|default_input") for around 30 minutes.
We can use dri2connect/dri3open to get the /dev/dri/card device. Note that this doesn't work on nvidia x11.
Add support for QVBR (QP with target bitrate). Maybe use VBR instead, since nvidia doesn't support QVBR and neither does vulkan.
@@ -320,10 +311,6 @@ Set top level window argument for portal capture. Same for gpu-screen-recorder-g
Remove unix domain socket code from kms-client/server and use socketpair directly. To make this possible always execute the kms server permission setup in flatpak, before starting recording (in gpu-screen-recorder-gtk).
Application audio capture isn't good enough. It creates a sink that for some users automatically gets selected as the default output device and is visible as an output device.
Fix some of these issues by setting gsr-app-sink media class to "Stream/Input/Audio" and node.virtual=true.
However that causes pulseaudio to be unable to record from gsr-app-sink, and it ends up being stuck in pa_sound_device_handle_reconnect in the loop with pa_mainloop_iterate.
Add -k best/best_hdr/best_10bit option, to automatically choose the best codec (prefer av1, then hevc and then h264. For webm files choose vp9 and then vp8).
Check if region capture works properly with fractional scaling on wayland.
@@ -349,4 +336,21 @@ Support pausing recording when recording while replay/streaming.
Maybe use VK_VALVE_video_encode_rgb_conversion with vulkan encoding for shader-less rgb to yuv conversion. That would allow screen capture with no gpu processing.
Cursor sometimes doesn't have color when capturing region scaled (on kde plasma wayland at least).
Remove drm_monitor_get_display_server_data and do that work in the drm monitor query.
In gpu screen recorder --info output codec max resolutions. This allows for better error messages in ui frontends and easier and better error handling.
Set minimum fps for live stream or piping or always.
Support youtube sso.
Remove -fm content (support it but remove it from documentation and output a deprecation notice when it's used) and use it when using -fm vfr (which is the default option).
But first -fm content needs to be supported on wayland as well, by checking if there is a difference between frames (checksum the frame content).
-fm content also needs to have a minimum fps to prevent live stream from timing out when nothing changes on the screen.
There is a leak in nvfbc. When a monitor is turned off and then on there will be an x11 display leak inside nvfbc. This seems to be a bug in nvfbc.
Right now a mitigation has been added to not try to recreate the nvfbc session if the capture target (monitor) isn't connected (predict if nvfbc session create will fail).
One possible reason this happens is because bExternallyManagedContext is set to true.
This also means that nvfbc leaks the connection when destroying nvfbc, even if the monitor is connected (this is not an issue right now because the process just exits, but it would be if gsr was turned into a library).


@@ -14,13 +14,17 @@ typedef struct AVContentLightMetadata AVContentLightMetadata;
typedef struct gsr_capture gsr_capture;
typedef struct {
int width;
int height;
// Width and height of the video
int video_width;
int video_height;
// Width and height of the frame at the start of capture, the target size
int recording_width;
int recording_height;
int fps;
} gsr_capture_metadata;
struct gsr_capture {
/* These methods should not be called manually. Call gsr_capture_* instead. |capture_metdata->width| and |capture_metadata->height| should be set by this function */
/* These methods should not be called manually. Call gsr_capture_* instead. |capture_metadata->video_width| and |capture_metadata->video_height| should be set by this function */
int (*start)(gsr_capture *cap, gsr_capture_metadata *capture_metadata);
void (*on_event)(gsr_capture *cap, gsr_egl *egl); /* can be NULL */
void (*tick)(gsr_capture *cap); /* can be NULL. If there is an event then |on_event| is called before this */


@@ -10,23 +10,14 @@ typedef enum {
GSR_IMAGE_FORMAT_PNG
} gsr_image_format;
typedef enum {
GSR_IMAGE_WRITER_SOURCE_OPENGL,
GSR_IMAGE_WRITER_SOURCE_MEMORY
} gsr_image_writer_source;
typedef struct {
gsr_image_writer_source source;
gsr_egl *egl;
int width;
int height;
unsigned int texture;
const void *memory; /* Reference */
} gsr_image_writer;
bool gsr_image_writer_init_opengl(gsr_image_writer *self, gsr_egl *egl, int width, int height);
/* |memory| is taken as a reference. The data is expected to be in rgba8 format (8 bit rgba) */
bool gsr_image_writer_init_memory(gsr_image_writer *self, const void *memory, int width, int height);
void gsr_image_writer_deinit(gsr_image_writer *self);
/* Quality is between 1 and 100 where 100 is the max quality. Quality doesn't apply to lossless formats */


@@ -94,18 +94,12 @@ typedef struct {
size_t num_requested_links;
size_t requested_links_capacity_items;
struct pw_proxy **virtual_sink_proxies;
size_t num_virtual_sink_proxies;
size_t virtual_sink_proxies_capacity_items;
bool running;
} gsr_pipewire_audio;
bool gsr_pipewire_audio_init(gsr_pipewire_audio *self);
void gsr_pipewire_audio_deinit(gsr_pipewire_audio *self);
bool gsr_pipewire_audio_create_virtual_sink(gsr_pipewire_audio *self, const char *name);
/*
This function links audio source outputs from applications that match the name |app_names| to the input
that matches the name |stream_name_input|.
@@ -140,6 +134,17 @@ bool gsr_pipewire_audio_add_link_from_apps_to_sink(gsr_pipewire_audio *self, con
*/
bool gsr_pipewire_audio_add_link_from_apps_to_sink_inverted(gsr_pipewire_audio *self, const char **app_names, int num_app_names, const char *sink_name_input);
/*
This function links audio source outputs from devices that match the name |source_names| to the input
that matches the name |stream_name_input|.
If a device or a new device starts outputting audio after this function is called and the device name matches
then it will automatically link the audio sources.
|source_names| and |stream_name_input| are case-insensitive matches.
|source_names| can include "default_output" or "default_input" to use the default output/input
and it will automatically switch when the default output/input is changed in system audio settings.
*/
bool gsr_pipewire_audio_add_link_from_sources_to_stream(gsr_pipewire_audio *self, const char **source_names, int num_source_names, const char *stream_name_input);
/*
This function links audio source outputs from devices that match the name |source_names| to the input
that matches the name |sink_name_input|.


@@ -79,8 +79,8 @@ typedef struct {
struct spa_video_info format;
int server_version_sync;
bool negotiated;
bool renegotiated;
bool damaged;
bool has_modifier;
struct {
bool visible;


@@ -61,12 +61,12 @@ typedef enum {
/*
Get a sound device by name, returning the device into the |device| parameter.
|device_name| can be a device name or "default_output" or "default_input".
|device_name| can be a device name or "default_output", "default_input" or "" to not connect to any device (used for app audio for example).
If the device name is "default_output" or "default_input" then it will automatically switch which
device it records from when the default output/input is changed in the system audio settings.
Returns 0 on success, or a negative value on failure.
*/
int sound_device_get_by_name(SoundDevice *device, const char *device_name, const char *description, unsigned int num_channels, unsigned int period_frame_size, AudioFormat audio_format);
int sound_device_get_by_name(SoundDevice *device, const char *node_name, const char *device_name, const char *description, unsigned int num_channels, unsigned int period_frame_size, AudioFormat audio_format);
void sound_device_close(SoundDevice *device);


@@ -59,8 +59,8 @@ struct gsr_kms_response_item {
gsr_kms_rotation rotation;
int x;
int y;
int crtc_w;
int crtc_h;
int src_w;
int src_h;
struct hdr_output_metadata hdr_metadata;
};


@@ -141,21 +141,21 @@ typedef enum {
PLANE_PROPERTY_Y = 1 << 1,
PLANE_PROPERTY_SRC_X = 1 << 2,
PLANE_PROPERTY_SRC_Y = 1 << 3,
PLANE_PROPERTY_CRTC_W = 1 << 4,
PLANE_PROPERTY_CRTC_H = 1 << 5,
PLANE_PROPERTY_SRC_W = 1 << 4,
PLANE_PROPERTY_SRC_H = 1 << 5,
PLANE_PROPERTY_IS_CURSOR = 1 << 6,
PLANE_PROPERTY_IS_PRIMARY = 1 << 7,
PLANE_PROPERTY_ROTATION = 1 << 8,
} plane_property_mask;
/* Returns plane_property_mask */
static uint32_t plane_get_properties(int drmfd, uint32_t plane_id, int *x, int *y, int *src_x, int *src_y, int *crtc_w, int *crtc_h, gsr_kms_rotation *rotation) {
static uint32_t plane_get_properties(int drmfd, uint32_t plane_id, int *x, int *y, int *src_x, int *src_y, int *src_w, int *src_h, gsr_kms_rotation *rotation) {
*x = 0;
*y = 0;
*src_x = 0;
*src_y = 0;
*crtc_w = 0;
*crtc_h = 0;
*src_w = 0;
*src_h = 0;
*rotation = KMS_ROT_0;
plane_property_mask property_mask = 0;
@@ -184,12 +184,12 @@ static uint32_t plane_get_properties(int drmfd, uint32_t plane_id, int *x, int *
} else if((type & DRM_MODE_PROP_RANGE) && strcmp(prop->name, "SRC_Y") == 0) {
*src_y = (int)(props->prop_values[i] >> 16);
property_mask |= PLANE_PROPERTY_SRC_Y;
} else if((type & DRM_MODE_PROP_RANGE) && strcmp(prop->name, "CRTC_W") == 0) {
*crtc_w = props->prop_values[i];
property_mask |= PLANE_PROPERTY_CRTC_W;
} else if((type & DRM_MODE_PROP_RANGE) && strcmp(prop->name, "CRTC_H") == 0) {
*crtc_h = props->prop_values[i];
property_mask |= PLANE_PROPERTY_CRTC_H;
} else if((type & DRM_MODE_PROP_RANGE) && strcmp(prop->name, "SRC_W") == 0) {
*src_w = (int)(props->prop_values[i] >> 16);
property_mask |= PLANE_PROPERTY_SRC_W;
} else if((type & DRM_MODE_PROP_RANGE) && strcmp(prop->name, "SRC_H") == 0) {
*src_h = (int)(props->prop_values[i] >> 16);
property_mask |= PLANE_PROPERTY_SRC_H;
} else if((type & DRM_MODE_PROP_ENUM) && strcmp(prop->name, "type") == 0) {
const uint64_t current_enum_value = props->prop_values[i];
for(int j = 0; j < prop->count_enums; ++j) {
@@ -351,9 +351,9 @@ static int kms_get_fb(gsr_drm *drm, gsr_kms_response *response) {
// TODO: Check if dimensions have changed by comparing width and height to previous time this was called.
// TODO: Support other plane formats than rgb (with multiple planes, such as direct YUV420 on wayland).
int x = 0, y = 0, src_x = 0, src_y = 0, crtc_w = 0, crtc_h = 0;
int x = 0, y = 0, src_x = 0, src_y = 0, src_w = 0, src_h = 0;
gsr_kms_rotation rotation = KMS_ROT_0;
const uint32_t property_mask = plane_get_properties(drm->drmfd, plane->plane_id, &x, &y, &src_x, &src_y, &crtc_w, &crtc_h, &rotation);
const uint32_t property_mask = plane_get_properties(drm->drmfd, plane->plane_id, &x, &y, &src_x, &src_y, &src_w, &src_h, &rotation);
if(!(property_mask & PLANE_PROPERTY_IS_PRIMARY) && !(property_mask & PLANE_PROPERTY_IS_CURSOR))
continue;
@@ -392,13 +392,13 @@ static int kms_get_fb(gsr_drm *drm, gsr_kms_response *response) {
if(property_mask & PLANE_PROPERTY_IS_CURSOR) {
response->items[item_index].x = x;
response->items[item_index].y = y;
response->items[item_index].crtc_w = 0;
response->items[item_index].crtc_h = 0;
response->items[item_index].src_w = 0;
response->items[item_index].src_h = 0;
} else {
response->items[item_index].x = src_x;
response->items[item_index].y = src_y;
response->items[item_index].crtc_w = crtc_w;
response->items[item_index].crtc_h = crtc_h;
response->items[item_index].src_w = src_w;
response->items[item_index].src_h = src_h;
}
++response->num_items;


@@ -1,4 +1,4 @@
project('gpu-screen-recorder', ['c', 'cpp'], version : '5.8.2', default_options : ['warning_level=2'])
project('gpu-screen-recorder', ['c', 'cpp'], version : '5.10.2', default_options : ['warning_level=2'])
add_project_arguments('-Wshadow', language : ['c', 'cpp'])
if get_option('buildtype') == 'debug'


@@ -1,7 +1,7 @@
[package]
name = "gpu-screen-recorder"
type = "executable"
version = "5.8.2"
version = "5.10.2"
platforms = ["posix"]
[config]


@@ -251,7 +251,7 @@ static void usage_full() {
printf(" Run GPU Screen Recorder with the --list-application-audio option to list valid application names. It's possible to use an application name that is not listed in --list-application-audio,\n");
printf(" for example when trying to record audio from an application that hasn't started yet.\n");
printf("\n");
printf(" -q Video quality. Should be either 'medium', 'high', 'very_high' or 'ultra' when using '-bm qp' or '-bm vbr' options, and '-bm qp' is the default option used.\n");
printf(" -q Video/image quality. Should be either 'medium', 'high', 'very_high' or 'ultra' when using '-bm qp' or '-bm vbr' options, and '-bm qp' is the default option used.\n");
printf(" 'high' is the recommended option when live streaming or when you have a slower harddrive.\n");
printf(" When using '-bm cbr' option then this option is instead used to specify the video bitrate in kbps.\n");
printf(" Optional when using '-bm qp' or '-bm vbr' options, set to 'very_high' by default.\n");


@@ -209,14 +209,14 @@ static int gsr_capture_kms_start(gsr_capture *cap, gsr_capture_metadata *capture
if(self->params.output_resolution.x > 0 && self->params.output_resolution.y > 0) {
self->params.output_resolution = scale_keep_aspect_ratio(self->capture_size, self->params.output_resolution);
capture_metadata->width = self->params.output_resolution.x;
capture_metadata->height = self->params.output_resolution.y;
capture_metadata->video_width = self->params.output_resolution.x;
capture_metadata->video_height = self->params.output_resolution.y;
} else if(self->params.region_size.x > 0 && self->params.region_size.y > 0) {
capture_metadata->width = self->params.region_size.x;
capture_metadata->height = self->params.region_size.y;
capture_metadata->video_width = self->params.region_size.x;
capture_metadata->video_height = self->params.region_size.y;
} else {
capture_metadata->width = self->capture_size.x;
capture_metadata->height = self->capture_size.y;
capture_metadata->video_width = self->capture_size.x;
capture_metadata->video_height = self->capture_size.y;
}
self->last_time_monitor_check = clock_get_monotonic_seconds();
@@ -542,7 +542,7 @@ static void render_x11_cursor(gsr_capture_kms *self, gsr_color_conversion *color
}
static void gsr_capture_kms_update_capture_size_change(gsr_capture_kms *self, gsr_color_conversion *color_conversion, vec2i target_pos, const gsr_kms_response_item *drm_fd) {
if(target_pos.x != self->prev_target_pos.x || target_pos.y != self->prev_target_pos.y || drm_fd->crtc_w != self->prev_plane_size.x || drm_fd->crtc_h != self->prev_plane_size.y) {
if(target_pos.x != self->prev_target_pos.x || target_pos.y != self->prev_target_pos.y || drm_fd->src_w != self->prev_plane_size.x || drm_fd->src_h != self->prev_plane_size.y) {
self->prev_target_pos = target_pos;
self->prev_plane_size = self->capture_size;
gsr_color_conversion_clear(color_conversion);
@@ -621,15 +621,12 @@ static int gsr_capture_kms_capture(gsr_capture *cap, gsr_capture_metadata *captu
if(drm_fd->has_hdr_metadata && self->params.hdr && hdr_metadata_is_supported_format(&drm_fd->hdr_metadata))
gsr_kms_set_hdr_metadata(self, drm_fd);
self->capture_size = rotate_capture_size_if_rotated(self, (vec2i){ drm_fd->crtc_w, drm_fd->crtc_h });
self->capture_size = rotate_capture_size_if_rotated(self, (vec2i){ drm_fd->src_w, drm_fd->src_h });
if(self->params.region_size.x > 0 && self->params.region_size.y > 0)
self->capture_size = self->params.region_size;
const bool is_scaled = self->params.output_resolution.x > 0 && self->params.output_resolution.y > 0;
vec2i output_size = is_scaled ? self->params.output_resolution : self->capture_size;
output_size = scale_keep_aspect_ratio(self->capture_size, output_size);
const vec2i target_pos = { max_int(0, capture_metadata->width / 2 - output_size.x / 2), max_int(0, capture_metadata->height / 2 - output_size.y / 2) };
const vec2i output_size = scale_keep_aspect_ratio(self->capture_size, (vec2i){capture_metadata->recording_width, capture_metadata->recording_height});
const vec2i target_pos = { max_int(0, capture_metadata->video_width / 2 - output_size.x / 2), max_int(0, capture_metadata->video_height / 2 - output_size.y / 2) };
gsr_capture_kms_update_capture_size_change(self, color_conversion, target_pos, drm_fd);
vec2i capture_pos = self->capture_pos;
@@ -649,7 +646,7 @@ static int gsr_capture_kms_capture(gsr_capture *cap, gsr_capture_metadata *captu
}
const gsr_monitor_rotation plane_rotation = kms_rotation_to_gsr_monitor_rotation(drm_fd->rotation);
const gsr_monitor_rotation rotation = sub_rotations(self->monitor_rotation, plane_rotation);
const gsr_monitor_rotation rotation = capture_is_combined_plane ? GSR_MONITOR_ROT_0 : sub_rotations(self->monitor_rotation, plane_rotation);
gsr_color_conversion_draw(color_conversion, self->external_texture_fallback ? self->external_input_texture_id : self->input_texture_id,
target_pos, output_size,
@@ -668,7 +665,7 @@ static int gsr_capture_kms_capture(gsr_capture *cap, gsr_capture_metadata *captu
cursor_monitor_offset.y += self->params.region_position.y;
render_x11_cursor(self, color_conversion, cursor_monitor_offset, target_pos, output_size);
} else if(cursor_drm_fd) {
const vec2i framebuffer_size = rotate_capture_size_if_rotated(self, (vec2i){ drm_fd->crtc_w, drm_fd->crtc_h });
const vec2i framebuffer_size = rotate_capture_size_if_rotated(self, (vec2i){ drm_fd->src_w, drm_fd->src_h });
render_drm_cursor(self, color_conversion, cursor_drm_fd, target_pos, output_size, framebuffer_size);
}
}


@@ -13,6 +13,7 @@
#include <assert.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>
typedef struct {
gsr_capture_nvfbc_params params;
@@ -283,16 +284,16 @@ static int gsr_capture_nvfbc_start(gsr_capture *cap, gsr_capture_metadata *captu
goto error_cleanup;
}
capture_metadata->width = self->tracking_width;
capture_metadata->height = self->tracking_height;
capture_metadata->video_width = self->tracking_width;
capture_metadata->video_height = self->tracking_height;
if(self->params.output_resolution.x > 0 && self->params.output_resolution.y > 0) {
self->params.output_resolution = scale_keep_aspect_ratio((vec2i){capture_metadata->width, capture_metadata->height}, self->params.output_resolution);
capture_metadata->width = self->params.output_resolution.x;
capture_metadata->height = self->params.output_resolution.y;
self->params.output_resolution = scale_keep_aspect_ratio((vec2i){capture_metadata->video_width, capture_metadata->video_height}, self->params.output_resolution);
capture_metadata->video_width = self->params.output_resolution.x;
capture_metadata->video_height = self->params.output_resolution.y;
} else if(self->params.region_size.x > 0 && self->params.region_size.y > 0) {
capture_metadata->width = self->params.region_size.x;
capture_metadata->height = self->params.region_size.y;
capture_metadata->video_width = self->params.region_size.x;
capture_metadata->video_height = self->params.region_size.y;
}
return 0;
@@ -302,6 +303,35 @@ static int gsr_capture_nvfbc_start(gsr_capture *cap, gsr_capture_metadata *captu
return -1;
}
static bool gsr_capture_nvfbc_is_capture_monitor_connected(gsr_capture_nvfbc *self) {
Display *dpy = gsr_window_get_display(self->params.egl->window);
int num_monitors = 0;
XRRMonitorInfo *monitors = XRRGetMonitors(dpy, DefaultRootWindow(dpy), True, &num_monitors);
if(!monitors)
return false;
bool capture_monitor_connected = false;
if(strcmp(self->params.display_to_capture, "screen") == 0) {
capture_monitor_connected = num_monitors > 0;
} else {
for(int i = 0; i < num_monitors; ++i) {
char *monitor_name = XGetAtomName(dpy, monitors[i].name);
if(!monitor_name)
continue;
if(strcmp(monitor_name, self->params.display_to_capture) == 0) {
capture_monitor_connected = true;
XFree(monitor_name);
break;
}
XFree(monitor_name);
}
}
XRRFreeMonitors(monitors);
return capture_monitor_connected;
}
static int gsr_capture_nvfbc_capture(gsr_capture *cap, gsr_capture_metadata *capture_metadata, gsr_color_conversion *color_conversion) {
gsr_capture_nvfbc *self = cap->priv;
@@ -310,6 +340,13 @@ static int gsr_capture_nvfbc_capture(gsr_capture *cap, gsr_capture_metadata *cap
const double now = clock_get_monotonic_seconds();
if(now - self->nvfbc_dead_start >= nvfbc_recreate_retry_time_seconds) {
self->nvfbc_dead_start = now;
/*
Do not attempt to recreate the nvfbc session if the monitor isn't turned on/connected.
This predicts whether the nvfbc session creation below would fail, since a failed creation leaks an x11 display (a bug in the nvidia driver).
*/
if(!gsr_capture_nvfbc_is_capture_monitor_connected(self))
return 0;
gsr_capture_nvfbc_destroy_session_and_handle(self);
if(gsr_capture_nvfbc_setup_handle(self) != 0) {
@@ -322,6 +359,7 @@ static int gsr_capture_nvfbc_capture(gsr_capture *cap, gsr_capture_metadata *cap
return -1;
}
fprintf(stderr, "gsr info: gsr_capture_nvfbc_capture: recreated nvfbc session after modeset recovery\n");
self->nvfbc_needs_recreate = false;
} else {
return 0;
@@ -333,11 +371,8 @@ static int gsr_capture_nvfbc_capture(gsr_capture *cap, gsr_capture_metadata *cap
if(self->params.region_size.x > 0 && self->params.region_size.y > 0)
frame_size = self->params.region_size;
const bool is_scaled = self->params.output_resolution.x > 0 && self->params.output_resolution.y > 0;
vec2i output_size = is_scaled ? self->params.output_resolution : frame_size;
output_size = scale_keep_aspect_ratio(frame_size, output_size);
const vec2i target_pos = { max_int(0, capture_metadata->width / 2 - output_size.x / 2), max_int(0, capture_metadata->height / 2 - output_size.y / 2) };
const vec2i output_size = scale_keep_aspect_ratio(frame_size, (vec2i){capture_metadata->recording_width, capture_metadata->recording_height});
const vec2i target_pos = { max_int(0, capture_metadata->video_width / 2 - output_size.x / 2), max_int(0, capture_metadata->video_height / 2 - output_size.y / 2) };
NVFBC_FRAME_GRAB_INFO frame_info;
memset(&frame_info, 0, sizeof(frame_info));


@@ -293,12 +293,12 @@ static int gsr_capture_portal_start(gsr_capture *cap, gsr_capture_metadata *capt
}
if(self->params.output_resolution.x == 0 && self->params.output_resolution.y == 0) {
capture_metadata->width = self->capture_size.x;
capture_metadata->height = self->capture_size.y;
capture_metadata->video_width = self->capture_size.x;
capture_metadata->video_height = self->capture_size.y;
} else {
self->params.output_resolution = scale_keep_aspect_ratio(self->capture_size, self->params.output_resolution);
capture_metadata->width = self->params.output_resolution.x;
capture_metadata->height = self->params.output_resolution.y;
capture_metadata->video_width = self->params.output_resolution.x;
capture_metadata->video_height = self->params.output_resolution.y;
}
return 0;
@@ -353,11 +353,8 @@ static int gsr_capture_portal_capture(gsr_capture *cap, gsr_capture_metadata *ca
}
}
const bool is_scaled = self->params.output_resolution.x > 0 && self->params.output_resolution.y > 0;
vec2i output_size = is_scaled ? self->params.output_resolution : self->capture_size;
output_size = scale_keep_aspect_ratio(self->capture_size, output_size);
const vec2i target_pos = { max_int(0, capture_metadata->width / 2 - output_size.x / 2), max_int(0, capture_metadata->height / 2 - output_size.y / 2) };
const vec2i output_size = scale_keep_aspect_ratio(self->capture_size, (vec2i){capture_metadata->recording_width, capture_metadata->recording_height});
const vec2i target_pos = { max_int(0, capture_metadata->video_width / 2 - output_size.x / 2), max_int(0, capture_metadata->video_height / 2 - output_size.y / 2) };
const vec2i actual_texture_size = {self->pipewire_data.texture_width, self->pipewire_data.texture_height};


@@ -104,11 +104,11 @@ static int gsr_capture_xcomposite_start(gsr_capture *cap, gsr_capture_metadata *
self->texture_size.y = self->window_texture.window_height;
if(self->params.output_resolution.x == 0 && self->params.output_resolution.y == 0) {
capture_metadata->width = self->texture_size.x;
capture_metadata->height = self->texture_size.y;
capture_metadata->video_width = self->texture_size.x;
capture_metadata->video_height = self->texture_size.y;
} else {
capture_metadata->width = self->params.output_resolution.x;
capture_metadata->height = self->params.output_resolution.y;
capture_metadata->video_width = self->params.output_resolution.x;
capture_metadata->video_height = self->params.output_resolution.y;
}
self->window_resize_timer = clock_get_monotonic_seconds();
@@ -224,7 +224,7 @@ static bool gsr_capture_xcomposite_should_stop(gsr_capture *cap, bool *err) {
return false;
}
static int gsr_capture_xcomposite_capture(gsr_capture *cap, gsr_capture_metadata *capture_metdata, gsr_color_conversion *color_conversion) {
static int gsr_capture_xcomposite_capture(gsr_capture *cap, gsr_capture_metadata *capture_metadata, gsr_color_conversion *color_conversion) {
gsr_capture_xcomposite *self = cap->priv;
if(self->clear_background) {
@@ -232,11 +232,8 @@ static int gsr_capture_xcomposite_capture(gsr_capture *cap, gsr_capture_metadata
gsr_color_conversion_clear(color_conversion);
}
const bool is_scaled = self->params.output_resolution.x > 0 && self->params.output_resolution.y > 0;
vec2i output_size = is_scaled ? self->params.output_resolution : self->texture_size;
output_size = scale_keep_aspect_ratio(self->texture_size, output_size);
const vec2i target_pos = { max_int(0, capture_metdata->width / 2 - output_size.x / 2), max_int(0, capture_metdata->height / 2 - output_size.y / 2) };
const vec2i output_size = scale_keep_aspect_ratio(self->texture_size, (vec2i){capture_metadata->recording_width, capture_metadata->recording_height});
const vec2i target_pos = { max_int(0, capture_metadata->video_width / 2 - output_size.x / 2), max_int(0, capture_metadata->video_height / 2 - output_size.y / 2) };
//self->params.egl->glFlush();
//self->params.egl->glFinish();


@@ -59,14 +59,14 @@ static int gsr_capture_ximage_start(gsr_capture *cap, gsr_capture_metadata *capt
if(self->params.output_resolution.x > 0 && self->params.output_resolution.y > 0) {
self->params.output_resolution = scale_keep_aspect_ratio(self->capture_size, self->params.output_resolution);
capture_metadata->width = self->params.output_resolution.x;
capture_metadata->height = self->params.output_resolution.y;
capture_metadata->video_width = self->params.output_resolution.x;
capture_metadata->video_height = self->params.output_resolution.y;
} else if(self->params.region_size.x > 0 && self->params.region_size.y > 0) {
capture_metadata->width = self->params.region_size.x;
capture_metadata->height = self->params.region_size.y;
capture_metadata->video_width = self->params.region_size.x;
capture_metadata->video_height = self->params.region_size.y;
} else {
capture_metadata->width = self->capture_size.x;
capture_metadata->height = self->capture_size.y;
capture_metadata->video_width = self->capture_size.x;
capture_metadata->video_height = self->capture_size.y;
}
self->texture_id = gl_create_texture(self->params.egl, self->capture_size.x, self->capture_size.y, GL_RGB8, GL_RGB, GL_LINEAR);
@@ -147,14 +147,11 @@ static bool gsr_capture_ximage_upload_to_texture(gsr_capture_ximage *self, int x
return success;
}
static int gsr_capture_ximage_capture(gsr_capture *cap, gsr_capture_metadata *capture_metdata, gsr_color_conversion *color_conversion) {
static int gsr_capture_ximage_capture(gsr_capture *cap, gsr_capture_metadata *capture_metadata, gsr_color_conversion *color_conversion) {
gsr_capture_ximage *self = cap->priv;
const bool is_scaled = self->params.output_resolution.x > 0 && self->params.output_resolution.y > 0;
vec2i output_size = is_scaled ? self->params.output_resolution : self->capture_size;
output_size = scale_keep_aspect_ratio(self->capture_size, output_size);
const vec2i target_pos = { max_int(0, capture_metdata->width / 2 - output_size.x / 2), max_int(0, capture_metdata->height / 2 - output_size.y / 2) };
const vec2i output_size = scale_keep_aspect_ratio(self->capture_size, (vec2i){capture_metadata->recording_width, capture_metadata->recording_height});
const vec2i target_pos = { max_int(0, capture_metadata->video_width / 2 - output_size.x / 2), max_int(0, capture_metadata->video_height / 2 - output_size.y / 2) };
gsr_capture_ximage_upload_to_texture(self, self->capture_pos.x + self->params.region_position.x, self->capture_pos.y + self->params.region_position.y, self->capture_size.x, self->capture_size.y);
gsr_color_conversion_draw(color_conversion, self->texture_id,


@@ -245,8 +245,8 @@ static bool gsr_video_encoder_vaapi_start(gsr_video_encoder *encoder, AVCodecCon
video_codec_context->width = FFALIGN(video_codec_context->width, 2);
video_codec_context->height = FFALIGN(video_codec_context->height, 2);
} else {
video_codec_context->width = FFALIGN(video_codec_context->width, 64);
video_codec_context->height = FFALIGN(video_codec_context->height, 16);
video_codec_context->width = FFALIGN(video_codec_context->width, 256);
video_codec_context->height = FFALIGN(video_codec_context->height, 256);
}
} else if(self->params.egl->gpu_info.vendor == GSR_GPU_VENDOR_AMD && video_codec_context->codec_id == AV_CODEC_ID_AV1) {
// TODO: Dont do this for VCN 5 and forward which should fix this hardware bug


@@ -13,7 +13,6 @@
/* TODO: Support hdr/10-bit */
bool gsr_image_writer_init_opengl(gsr_image_writer *self, gsr_egl *egl, int width, int height) {
memset(self, 0, sizeof(*self));
self->source = GSR_IMAGE_WRITER_SOURCE_OPENGL;
self->egl = egl;
self->width = width;
self->height = height;
@@ -25,15 +24,6 @@ bool gsr_image_writer_init_opengl(gsr_image_writer *self, gsr_egl *egl, int widt
return true;
}
bool gsr_image_writer_init_memory(gsr_image_writer *self, const void *memory, int width, int height) {
memset(self, 0, sizeof(*self));
self->source = GSR_IMAGE_WRITER_SOURCE_OPENGL;
self->width = width;
self->height = height;
self->memory = memory;
return true;
}
void gsr_image_writer_deinit(gsr_image_writer *self) {
if(self->texture) {
self->egl->glDeleteTextures(1, &self->texture);
@@ -64,7 +54,6 @@ static bool gsr_image_writer_write_memory_to_file(gsr_image_writer *self, const
}
static bool gsr_image_writer_write_opengl_texture_to_file(gsr_image_writer *self, const char *filepath, gsr_image_format image_format, int quality) {
assert(self->source == GSR_IMAGE_WRITER_SOURCE_OPENGL);
uint8_t *frame_data = malloc(self->width * self->height * 4);
if(!frame_data) {
fprintf(stderr, "gsr error: gsr_image_writer_write_to_file: failed to allocate memory for image frame\n");
@@ -90,11 +79,5 @@ static bool gsr_image_writer_write_opengl_texture_to_file(gsr_image_writer *self
}
bool gsr_image_writer_write_to_file(gsr_image_writer *self, const char *filepath, gsr_image_format image_format, int quality) {
switch(self->source) {
case GSR_IMAGE_WRITER_SOURCE_OPENGL:
return gsr_image_writer_write_opengl_texture_to_file(self, filepath, image_format, quality);
case GSR_IMAGE_WRITER_SOURCE_MEMORY:
return gsr_image_writer_write_memory_to_file(self, filepath, image_format, quality, self->memory);
}
return false;
return gsr_image_writer_write_opengl_texture_to_file(self, filepath, image_format, quality);
}


@@ -122,12 +122,12 @@ static void get_monitor_by_position_callback(const gsr_monitor *monitor, void *u
std::swap(monitor_size.x, monitor_size.y);
}
if(!data->output_name && data->position.x >= monitor_position.x && data->position.x <= monitor_position.x + monitor->size.x
&& data->position.y >= monitor_position.y && data->position.y <= monitor_position.y + monitor->size.y)
if(!data->output_name && data->position.x >= monitor_position.x && data->position.x <= monitor_position.x + monitor_size.x
&& data->position.y >= monitor_position.y && data->position.y <= monitor_position.y + monitor_size.y)
{
data->output_name = strdup(monitor->name);
data->monitor_pos = monitor_position;
data->monitor_size = monitor->size;
data->monitor_size = monitor_size;
}
}
@@ -1268,6 +1268,11 @@ static RecordingStartAudio* get_recording_start_item_by_stream_index(RecordingSt
return nullptr;
}
struct AudioPtsOffset {
int64_t pts_offset = 0;
int stream_index = 0;
};
static void save_replay_async(AVCodecContext *video_codec_context, int video_stream_index, const std::vector<AudioTrack> &audio_tracks, gsr_replay_buffer *replay_buffer, std::string output_dir, const char *container_format, const std::string &file_extension, bool date_folders, bool hdr, gsr_capture *capture, int current_save_replay_seconds) {
if(save_replay_thread.valid())
return;
@@ -1279,14 +1284,15 @@ static void save_replay_async(AVCodecContext *video_codec_context, int video_str
return;
}
const gsr_replay_buffer_iterator audio_start_iterator = gsr_replay_buffer_find_keyframe(replay_buffer, video_start_iterator, video_stream_index, true);
// if(audio_start_index == (size_t)-1) {
// fprintf(stderr, "gsr error: failed to save replay: failed to find an audio keyframe. perhaps replay was saved too fast, before anything has been recorded\n");
// return;
// }
const int64_t video_pts_offset = gsr_replay_buffer_iterator_get_packet(replay_buffer, video_start_iterator)->pts;
const int64_t audio_pts_offset = audio_start_iterator.packet_index == (size_t)-1 ? 0 : gsr_replay_buffer_iterator_get_packet(replay_buffer, audio_start_iterator)->pts;
std::vector<AudioPtsOffset> audio_pts_offsets;
audio_pts_offsets.reserve(audio_tracks.size());
for(const AudioTrack &audio_track : audio_tracks) {
const gsr_replay_buffer_iterator audio_start_iterator = gsr_replay_buffer_find_keyframe(replay_buffer, video_start_iterator, audio_track.stream_index, false);
const int64_t audio_pts_offset = audio_start_iterator.packet_index == (size_t)-1 ? 0 : gsr_replay_buffer_iterator_get_packet(replay_buffer, audio_start_iterator)->pts;
audio_pts_offsets.push_back(AudioPtsOffset{audio_pts_offset, audio_track.stream_index});
}
gsr_replay_buffer *cloned_replay_buffer = gsr_replay_buffer_clone(replay_buffer);
if(!cloned_replay_buffer) {
@@ -1302,7 +1308,7 @@ static void save_replay_async(AVCodecContext *video_codec_context, int video_str
save_replay_output_filepath = std::move(output_filepath);
save_replay_thread = std::async(std::launch::async, [video_stream_index, recording_start_result, video_start_iterator, video_pts_offset, audio_pts_offset, video_codec_context, cloned_replay_buffer]() mutable {
save_replay_thread = std::async(std::launch::async, [video_stream_index, recording_start_result, video_start_iterator, video_pts_offset, audio_pts_offsets{std::move(audio_pts_offsets)}, video_codec_context, cloned_replay_buffer]() mutable {
gsr_replay_buffer_iterator replay_iterator = video_start_iterator;
for(;;) {
AVPacket *replay_packet = gsr_replay_buffer_iterator_get_packet(cloned_replay_buffer, replay_iterator);
@@ -1350,8 +1356,10 @@ static void save_replay_async(AVCodecContext *video_codec_context, int video_str
stream = recording_start_audio->stream;
codec_context = audio_track->codec_context;
av_packet.pts -= audio_pts_offset;
av_packet.dts -= audio_pts_offset;
const AudioPtsOffset &audio_pts_offset = audio_pts_offsets[av_packet.stream_index - 1];
assert(audio_pts_offset.stream_index == av_packet.stream_index);
av_packet.pts -= audio_pts_offset.pts_offset;
av_packet.dts -= audio_pts_offset.pts_offset;
}
//av_packet.stream_index = stream->index;
@@ -1408,8 +1416,12 @@ static const AudioDevice* get_audio_device_by_name(const std::vector<AudioDevice
static MergedAudioInputs parse_audio_input_arg(const char *str) {
MergedAudioInputs result;
result.track_name = str;
split_string(str, '|', [&](const char *sub, size_t size) {
if(size == 0)
return true;
AudioInput audio_input;
audio_input.name.assign(sub, size);
@@ -1637,32 +1649,6 @@ static bool get_supported_video_codecs(gsr_egl *egl, gsr_video_codec video_codec
return false;
}
static bool get_supported_video_codecs_with_cpu_fallback(gsr_egl *egl, args_parser *args_parser, bool cleanup, gsr_supported_video_codecs *video_codecs) {
if(get_supported_video_codecs(egl, args_parser->video_codec, args_parser->video_encoder == GSR_VIDEO_ENCODER_HW_CPU, cleanup, video_codecs))
return true;
if(args_parser->video_encoder == GSR_VIDEO_ENCODER_HW_CPU || !args_parser->fallback_cpu_encoding)
return false;
fprintf(stderr, "gsr warning: gpu encoding is not available on your system, trying cpu encoding instead because -fallback-cpu-encoding is enabled. Install the proper vaapi drivers on your system (if supported) if you experience performance issues\n");
if(get_supported_video_codecs(egl, GSR_VIDEO_CODEC_H264, true, cleanup, video_codecs)) {
if(args_parser->video_codec != (gsr_video_codec)GSR_VIDEO_CODEC_AUTO && args_parser->video_codec != GSR_VIDEO_CODEC_H264) {
fprintf(stderr, "gsr warning: cpu encoding is used but video codec isn't set to h264. Forcing video codec to h264\n");
args_parser->video_codec = GSR_VIDEO_CODEC_H264;
}
args_parser->video_encoder = GSR_VIDEO_ENCODER_HW_CPU;
if(args_parser->video_encoder == GSR_VIDEO_ENCODER_HW_CPU && args_parser->bitrate_mode == GSR_BITRATE_MODE_VBR) {
fprintf(stderr, "gsr warning: bitrate mode has been forcefully set to qp because software encoding option doesn't support vbr option\n");
args_parser->bitrate_mode = GSR_BITRATE_MODE_QP;
}
return true;
}
return false;
}
static void xwayland_check_callback(const gsr_monitor *monitor, void *userdata) {
bool *xwayland_found = (bool*)userdata;
if(monitor->name_len >= 8 && strncmp(monitor->name, "XWAYLAND", 8) == 0)
@@ -2302,7 +2288,7 @@ static int video_quality_to_image_quality_value(gsr_video_quality video_quality)
case GSR_VIDEO_QUALITY_HIGH:
return 85;
case GSR_VIDEO_QUALITY_VERY_HIGH:
return 90;
return 91; // Quality above 90 makes the jpeg image encoder (stb_image_write) use yuv444 instead of yuv420, which greatly improves the quality of small colored text on a dark background
case GSR_VIDEO_QUALITY_ULTRA:
return 97;
}
@@ -2318,8 +2304,10 @@ static void capture_image_to_file(args_parser &arg_parser, gsr_egl *egl, gsr_ima
gsr_capture *capture = create_capture_impl(arg_parser, egl, prefer_ximage);
gsr_capture_metadata capture_metadata;
capture_metadata.width = 0;
capture_metadata.height = 0;
capture_metadata.video_width = 0;
capture_metadata.video_height = 0;
capture_metadata.recording_width = 0;
capture_metadata.recording_height = 0;
capture_metadata.fps = fps;
int capture_result = gsr_capture_start(capture, &capture_metadata);
@@ -2328,8 +2316,11 @@ static void capture_image_to_file(args_parser &arg_parser, gsr_egl *egl, gsr_ima
_exit(capture_result);
}
capture_metadata.recording_width = capture_metadata.video_width;
capture_metadata.recording_height = capture_metadata.video_height;
gsr_image_writer image_writer;
if(!gsr_image_writer_init_opengl(&image_writer, egl, capture_metadata.width, capture_metadata.height)) {
if(!gsr_image_writer_init_opengl(&image_writer, egl, capture_metadata.video_width, capture_metadata.video_height)) {
fprintf(stderr, "gsr error: capture_image_to_file_wayland: gsr_image_write_gl_init failed\n");
_exit(1);
}
@@ -2341,7 +2332,7 @@ static void capture_image_to_file(args_parser &arg_parser, gsr_egl *egl, gsr_ima
color_conversion_params.load_external_image_shader = gsr_capture_uses_external_image(capture);
color_conversion_params.destination_textures[0] = image_writer.texture;
color_conversion_params.destination_textures_size[0] = { capture_metadata.width, capture_metadata.height };
color_conversion_params.destination_textures_size[0] = { capture_metadata.video_width, capture_metadata.video_height };
color_conversion_params.num_destination_textures = 1;
color_conversion_params.destination_color = GSR_DESTINATION_COLOR_RGB8;
@@ -2744,21 +2735,31 @@ static void print_codec_error(gsr_video_codec video_codec) {
" You can test this by running 'vainfo | grep VAEntrypointEncSlice' to see if it matches any H264/HEVC/AV1/VP8/VP9 profile.\n"
" On such distros, you need to manually install mesa from source to enable H264/HEVC hardware acceleration, or use a more user friendly distro. Alternatively record with AV1 if supported by your GPU.\n"
" You can alternatively use the flatpak version of GPU Screen Recorder (https://flathub.org/apps/com.dec05eba.gpu_screen_recorder) which bypasses system issues with patented H264/HEVC codecs.\n"
" Make sure you have mesa-extra freedesktop runtime installed when using the flatpak (this should be the default), which can be installed with this command:\n"
" flatpak install --system org.freedesktop.Platform.GL.default//23.08-extra\n"
" If your GPU doesn't support hardware accelerated video encoding then you can use '-encoder cpu' option to encode with your cpu instead.\n", video_codec_name, video_codec_name, video_codec_name);
" If your GPU doesn't support hardware accelerated video encoding then you can use '-fallback-cpu-encoding yes' option to encode with your cpu instead.\n", video_codec_name, video_codec_name, video_codec_name);
}
static const AVCodec* pick_video_codec(gsr_egl *egl, args_parser *args_parser, bool is_flv, bool *low_power, gsr_supported_video_codecs *supported_video_codecs) {
static void force_cpu_encoding(args_parser *args_parser) {
args_parser->video_codec = GSR_VIDEO_CODEC_H264;
args_parser->video_encoder = GSR_VIDEO_ENCODER_HW_CPU;
if(args_parser->bitrate_mode == GSR_BITRATE_MODE_VBR) {
fprintf(stderr, "gsr warning: bitrate mode has been forcefully set to qp because software encoding option doesn't support vbr option\n");
args_parser->bitrate_mode = GSR_BITRATE_MODE_QP;
}
}
static const AVCodec* pick_video_codec(gsr_egl *egl, args_parser *args_parser, bool use_fallback_codec, bool *low_power, gsr_supported_video_codecs *supported_video_codecs) {
// TODO: software encoder for hevc, av1, vp8 and vp9
*low_power = false;
const AVCodec *video_codec_f = get_av_codec_if_supported(args_parser->video_codec, egl, args_parser->video_encoder == GSR_VIDEO_ENCODER_HW_CPU, supported_video_codecs);
if(!video_codec_f && !is_flv && args_parser->video_encoder != GSR_VIDEO_ENCODER_HW_CPU) {
if(!video_codec_f && use_fallback_codec && args_parser->video_encoder != GSR_VIDEO_ENCODER_HW_CPU) {
switch(args_parser->video_codec) {
case GSR_VIDEO_CODEC_H264: {
fprintf(stderr, "gsr warning: selected video codec h264 is not supported, trying hevc instead\n");
args_parser->video_codec = GSR_VIDEO_CODEC_HEVC;
fprintf(stderr, "gsr error: selected video codec h264 is not supported\n");
if(args_parser->fallback_cpu_encoding) {
fprintf(stderr, "gsr warning: gpu encoding is not available on your system, trying cpu encoding instead because -fallback-cpu-encoding is enabled. Install the proper vaapi drivers on your system (if supported) if you experience performance issues\n");
force_cpu_encoding(args_parser);
}
break;
}
case GSR_VIDEO_CODEC_HEVC:
@@ -2766,14 +2767,14 @@ static const AVCodec* pick_video_codec(gsr_egl *egl, args_parser *args_parser, b
case GSR_VIDEO_CODEC_HEVC_10BIT: {
fprintf(stderr, "gsr warning: selected video codec hevc is not supported, trying h264 instead\n");
args_parser->video_codec = GSR_VIDEO_CODEC_H264;
break;
return pick_video_codec(egl, args_parser, true, low_power, supported_video_codecs);
}
case GSR_VIDEO_CODEC_AV1:
case GSR_VIDEO_CODEC_AV1_HDR:
case GSR_VIDEO_CODEC_AV1_10BIT: {
fprintf(stderr, "gsr warning: selected video codec av1 is not supported, trying h264 instead\n");
args_parser->video_codec = GSR_VIDEO_CODEC_H264;
break;
return pick_video_codec(egl, args_parser, true, low_power, supported_video_codecs);
}
case GSR_VIDEO_CODEC_VP8:
case GSR_VIDEO_CODEC_VP9:
@@ -2783,23 +2784,23 @@ static const AVCodec* pick_video_codec(gsr_egl *egl, args_parser *args_parser, b
fprintf(stderr, "gsr warning: selected video codec h264_vulkan is not supported, trying h264 instead\n");
args_parser->video_codec = GSR_VIDEO_CODEC_H264;
// Need to do a query again because this time it's without vulkan
if(!get_supported_video_codecs_with_cpu_fallback(egl, args_parser, true, supported_video_codecs)) {
if(!get_supported_video_codecs(egl, args_parser->video_codec, false, true, supported_video_codecs)) {
fprintf(stderr, "gsr error: failed to query for supported video codecs\n");
print_codec_error(args_parser->video_codec);
_exit(11);
}
break;
return pick_video_codec(egl, args_parser, true, low_power, supported_video_codecs);
}
case GSR_VIDEO_CODEC_HEVC_VULKAN: {
fprintf(stderr, "gsr warning: selected video codec hevc_vulkan is not supported, trying hevc instead\n");
args_parser->video_codec = GSR_VIDEO_CODEC_HEVC;
// Need to do a query again because this time it's without vulkan
if(!get_supported_video_codecs_with_cpu_fallback(egl, args_parser, true, supported_video_codecs)) {
if(!get_supported_video_codecs(egl, args_parser->video_codec, false, true, supported_video_codecs)) {
fprintf(stderr, "gsr error: failed to query for supported video codecs\n");
print_codec_error(args_parser->video_codec);
_exit(11);
}
break;
return pick_video_codec(egl, args_parser, true, low_power, supported_video_codecs);
}
}
@@ -2818,7 +2819,7 @@ static const AVCodec* pick_video_codec(gsr_egl *egl, args_parser *args_parser, b
/* Returns -1 if none is available */
static gsr_video_codec select_appropriate_video_codec_automatically(gsr_capture_metadata capture_metadata, const gsr_supported_video_codecs *supported_video_codecs) {
const vec2i capture_size = {capture_metadata.width, capture_metadata.height};
const vec2i capture_size = {capture_metadata.video_width, capture_metadata.video_height};
if(supported_video_codecs->h264.supported && codec_supports_resolution(supported_video_codecs->h264.max_resolution, capture_size)) {
fprintf(stderr, "gsr info: using h264 encoder because a codec was not specified\n");
return GSR_VIDEO_CODEC_H264;
@@ -2839,19 +2840,9 @@ static gsr_video_codec select_appropriate_video_codec_automatically(gsr_capture_
static const AVCodec* select_video_codec_with_fallback(gsr_capture_metadata capture_metadata, args_parser *args_parser, const char *file_extension, gsr_egl *egl, bool *low_power) {
gsr_supported_video_codecs supported_video_codecs;
if(!get_supported_video_codecs_with_cpu_fallback(egl, args_parser, true, &supported_video_codecs)) {
fprintf(stderr, "gsr error: failed to query for supported video codecs\n");
print_codec_error(args_parser->video_codec);
_exit(11);
}
if(args_parser->video_encoder == GSR_VIDEO_ENCODER_HW_CPU && args_parser->video_codec != (gsr_video_codec)GSR_VIDEO_CODEC_AUTO && args_parser->video_codec != GSR_VIDEO_CODEC_H264) {
fprintf(stderr, "gsr warning: cpu encoding is used but video codec isn't set to h264. Forcing video codec to h264\n");
args_parser->video_codec = GSR_VIDEO_CODEC_H264;
}
if(args_parser->video_encoder != GSR_VIDEO_ENCODER_HW_CPU)
set_supported_video_codecs_ffmpeg(&supported_video_codecs, nullptr, egl->gpu_info.vendor);
get_supported_video_codecs(egl, args_parser->video_codec, args_parser->video_encoder == GSR_VIDEO_ENCODER_HW_CPU, true, &supported_video_codecs);
// TODO: Use gsr_supported_video_codecs *supported_video_codecs_vulkan here to properly query vulkan video support
set_supported_video_codecs_ffmpeg(&supported_video_codecs, nullptr, egl->gpu_info.vendor);
const bool video_codec_auto = args_parser->video_codec == (gsr_video_codec)GSR_VIDEO_CODEC_AUTO;
if(video_codec_auto) {
@@ -2861,48 +2852,38 @@ static const AVCodec* select_video_codec_with_fallback(gsr_capture_metadata capt
} else if(args_parser->video_encoder == GSR_VIDEO_ENCODER_HW_CPU) {
fprintf(stderr, "gsr info: using h264 encoder because a codec was not specified\n");
args_parser->video_codec = GSR_VIDEO_CODEC_H264;
} else {
} else if(args_parser->video_encoder != GSR_VIDEO_ENCODER_HW_CPU) {
args_parser->video_codec = select_appropriate_video_codec_automatically(capture_metadata, &supported_video_codecs);
if(args_parser->video_codec == (gsr_video_codec)-1) {
fprintf(stderr, "gsr error: no video encoder was specified and neither h264, hevc nor av1 are supported on your system or you are trying to capture at a resolution higher than your system supports for each codec\n");
_exit(52);
if(args_parser->fallback_cpu_encoding) {
fprintf(stderr, "gsr warning: gpu encoding is not available on your system or your gpu doesn't support recording at the resolution you are trying to record, trying cpu encoding instead because -fallback-cpu-encoding is enabled. Install the proper vaapi drivers on your system (if supported) if you experience performance issues\n");
force_cpu_encoding(args_parser);
} else {
fprintf(stderr, "gsr error: no video encoder was specified and neither h264, hevc nor av1 are supported on your system or you are trying to capture at a resolution higher than your system supports for each codec.\n");
fprintf(stderr, " Ensure that you have installed the proper vaapi driver. If your gpu doesn't support video encoding then you can run gpu-screen-recorder with \"-fallback-cpu-encoding yes\" option to use cpu encoding.\n");
_exit(52);
}
}
}
}
// TODO: Allow hevc, vp9 and av1 in (enhanced) flv (supported since ffmpeg 6.1)
const bool is_flv = strcmp(file_extension, "flv") == 0;
if(is_flv) {
if(LIBAVFORMAT_VERSION_INT < AV_VERSION_INT(60, 10, 100) && strcmp(file_extension, "flv") == 0) {
if(args_parser->video_codec != GSR_VIDEO_CODEC_H264) {
args_parser->video_codec = GSR_VIDEO_CODEC_H264;
fprintf(stderr, "gsr warning: hevc/av1 is not compatible with flv, falling back to h264 instead.\n");
fprintf(stderr, "gsr warning: hevc/av1 is not compatible with flv in your outdated version of ffmpeg, falling back to h264 instead.\n");
}
// if(audio_codec != GSR_AUDIO_CODEC_AAC) {
// audio_codec_to_use = "aac";
// audio_codec = GSR_AUDIO_CODEC_AAC;
// fprintf(stderr, "gsr warning: flv only supports aac, falling back to aac instead.\n");
// }
}
const bool is_hls = strcmp(file_extension, "m3u8") == 0;
if(is_hls) {
} else if(strcmp(file_extension, "m3u8") == 0) {
if(video_codec_is_av1(args_parser->video_codec)) {
args_parser->video_codec = GSR_VIDEO_CODEC_HEVC;
fprintf(stderr, "gsr warning: av1 is not compatible with hls (m3u8), falling back to hevc instead.\n");
}
// if(audio_codec != GSR_AUDIO_CODEC_AAC) {
// audio_codec_to_use = "aac";
// audio_codec = GSR_AUDIO_CODEC_AAC;
// fprintf(stderr, "gsr warning: hls (m3u8) only supports aac, falling back to aac instead.\n");
// }
}
const AVCodec *codec = pick_video_codec(egl, args_parser, is_flv, low_power, &supported_video_codecs);
const AVCodec *codec = pick_video_codec(egl, args_parser, true, low_power, &supported_video_codecs);
const vec2i codec_max_resolution = codec_get_max_resolution(args_parser->video_codec, args_parser->video_encoder == GSR_VIDEO_ENCODER_HW_CPU, &supported_video_codecs);
const vec2i capture_size = {capture_metadata.width, capture_metadata.height};
const vec2i capture_size = {capture_metadata.video_width, capture_metadata.video_height};
if(!codec_supports_resolution(codec_max_resolution, capture_size)) {
const char *video_codec_name = video_codec_to_string(args_parser->video_codec);
fprintf(stderr, "gsr error: The max resolution for video codec %s is %dx%d while you are trying to capture at resolution %dx%d. Change capture resolution or video codec and try again\n",
@@ -2930,7 +2911,7 @@ static std::vector<AudioDeviceData> create_device_audio_inputs(const std::vector
audio_device.sound_device.frames = 0;
} else {
const std::string description = "gsr-" + audio_input.name;
if(sound_device_get_by_name(&audio_device.sound_device, audio_input.name.c_str(), description.c_str(), num_channels, audio_codec_context->frame_size, audio_codec_context_get_audio_format(audio_codec_context)) != 0) {
if(sound_device_get_by_name(&audio_device.sound_device, description.c_str(), audio_input.name.c_str(), description.c_str(), num_channels, audio_codec_context->frame_size, audio_codec_context_get_audio_format(audio_codec_context)) != 0) {
fprintf(stderr, "gsr error: failed to get \"%s\" audio device\n", audio_input.name.c_str());
_exit(1);
}
@@ -2955,17 +2936,12 @@ static AudioDeviceData create_application_audio_audio_input(const MergedAudioInp
fprintf(stderr, "gsr error: failed to generate random string\n");
_exit(1);
}
std::string combined_sink_name = "gsr-combined-";
combined_sink_name.append(random_str, sizeof(random_str));
if(!gsr_pipewire_audio_create_virtual_sink(pipewire_audio, combined_sink_name.c_str())) {
fprintf(stderr, "gsr error: failed to create virtual sink for application audio\n");
_exit(1);
}
combined_sink_name += ".monitor";
if(sound_device_get_by_name(&audio_device.sound_device, combined_sink_name.c_str(), "gpu-screen-recorder", num_channels, audio_codec_context->frame_size, audio_codec_context_get_audio_format(audio_codec_context)) != 0) {
if(sound_device_get_by_name(&audio_device.sound_device, combined_sink_name.c_str(), "", "gpu-screen-recorder", num_channels, audio_codec_context->frame_size, audio_codec_context_get_audio_format(audio_codec_context)) != 0) {
fprintf(stderr, "gsr error: failed to setup audio recording to combined sink\n");
_exit(1);
}
@@ -2986,19 +2962,19 @@ static AudioDeviceData create_application_audio_audio_input(const MergedAudioInp
}
if(!audio_devices_sources.empty()) {
if(!gsr_pipewire_audio_add_link_from_sources_to_sink(pipewire_audio, audio_devices_sources.data(), audio_devices_sources.size(), combined_sink_name.c_str())) {
if(!gsr_pipewire_audio_add_link_from_sources_to_stream(pipewire_audio, audio_devices_sources.data(), audio_devices_sources.size(), combined_sink_name.c_str())) {
fprintf(stderr, "gsr error: failed to add application audio link\n");
_exit(1);
}
}
if(app_audio_inverted) {
if(!gsr_pipewire_audio_add_link_from_apps_to_sink_inverted(pipewire_audio, app_names.data(), app_names.size(), combined_sink_name.c_str())) {
if(!gsr_pipewire_audio_add_link_from_apps_to_stream_inverted(pipewire_audio, app_names.data(), app_names.size(), combined_sink_name.c_str())) {
fprintf(stderr, "gsr error: failed to add application audio link\n");
_exit(1);
}
} else {
if(!gsr_pipewire_audio_add_link_from_apps_to_sink(pipewire_audio, app_names.data(), app_names.size(), combined_sink_name.c_str())) {
if(!gsr_pipewire_audio_add_link_from_apps_to_stream(pipewire_audio, app_names.data(), app_names.size(), combined_sink_name.c_str())) {
fprintf(stderr, "gsr error: failed to add application audio link\n");
_exit(1);
}
@@ -3105,6 +3081,7 @@ int main(int argc, char **argv) {
// Linux nvidia driver 580.105.08 added the environment variable CUDA_DISABLE_PERF_BOOST to disable the p2 power level issue,
// where running cuda (which includes nvenc) causes the gpu to be forcefully set to p2 power level which on many nvidia gpus
// decreases gpu performance in games. On my GTX 1080 it decreased game performance by 10% for absolutely no reason.
// TODO: This only seems to allow the gpu to go to lower power level states, but not higher than p2.
setenv("CUDA_DISABLE_PERF_BOOST", "1", true);
// Stop nvidia driver from buffering frames
setenv("__GL_MaxFramesAllowed", "1", true);
@@ -3301,8 +3278,10 @@ int main(int argc, char **argv) {
gsr_capture *capture = create_capture_impl(arg_parser, &egl, false);
gsr_capture_metadata capture_metadata;
capture_metadata.width = 0;
capture_metadata.height = 0;
capture_metadata.video_width = 0;
capture_metadata.video_height = 0;
capture_metadata.recording_width = 0;
capture_metadata.recording_height = 0;
capture_metadata.fps = arg_parser.fps;
int capture_result = gsr_capture_start(capture, &capture_metadata);
@@ -3323,11 +3302,16 @@ int main(int argc, char **argv) {
AVStream *video_stream = nullptr;
std::vector<AudioTrack> audio_tracks;
if(arg_parser.video_encoder == GSR_VIDEO_ENCODER_HW_CPU && arg_parser.video_codec != (gsr_video_codec)GSR_VIDEO_CODEC_AUTO && arg_parser.video_codec != GSR_VIDEO_CODEC_H264) {
fprintf(stderr, "gsr error: -encoder cpu was specified but a codec other than h264 was selected. -encoder cpu supports only h264 at the moment\n");
_exit(1);
}
bool low_power = false;
const AVCodec *video_codec_f = select_video_codec_with_fallback(capture_metadata, &arg_parser, file_extension.c_str(), &egl, &low_power);
const enum AVPixelFormat video_pix_fmt = get_pixel_format(arg_parser.video_codec, egl.gpu_info.vendor, arg_parser.video_encoder == GSR_VIDEO_ENCODER_HW_CPU);
AVCodecContext *video_codec_context = create_video_codec_context(video_pix_fmt, video_codec_f, egl, arg_parser, capture_metadata.width, capture_metadata.height);
AVCodecContext *video_codec_context = create_video_codec_context(video_pix_fmt, video_codec_f, egl, arg_parser, capture_metadata.video_width, capture_metadata.video_height);
if(!is_replaying)
video_stream = create_stream(av_format_context, video_codec_context);
@@ -3337,8 +3321,8 @@ int main(int argc, char **argv) {
_exit(1);
}
video_frame->format = video_codec_context->pix_fmt;
video_frame->width = capture_metadata.width;
video_frame->height = capture_metadata.height;
video_frame->width = capture_metadata.video_width;
video_frame->height = capture_metadata.video_height;
video_frame->color_range = video_codec_context->color_range;
video_frame->color_primaries = video_codec_context->color_primaries;
video_frame->color_trc = video_codec_context->color_trc;
@@ -3363,9 +3347,12 @@ int main(int argc, char **argv) {
_exit(1);
}
capture_metadata.recording_width = capture_metadata.video_width;
capture_metadata.recording_height = capture_metadata.video_height;
// TODO: What if this updated resolution is above max resolution?
capture_metadata.width = video_codec_context->width;
capture_metadata.height = video_codec_context->height;
capture_metadata.video_width = video_codec_context->width;
capture_metadata.video_height = video_codec_context->height;
const Arg *plugin_arg = args_parser_get_arg(&arg_parser, "-p");
assert(plugin_arg);
@@ -3378,8 +3365,8 @@ int main(int argc, char **argv) {
assert(color_depth == GSR_COLOR_DEPTH_8_BITS || color_depth == GSR_COLOR_DEPTH_10_BITS);
const gsr_plugin_init_params plugin_init_params = {
(unsigned int)capture_metadata.width,
(unsigned int)capture_metadata.height,
(unsigned int)capture_metadata.video_width,
(unsigned int)capture_metadata.video_height,
(unsigned int)arg_parser.fps,
color_depth == GSR_COLOR_DEPTH_8_BITS ? GSR_PLUGIN_COLOR_DEPTH_8_BITS : GSR_PLUGIN_COLOR_DEPTH_10_BITS,
egl.context_type == GSR_GL_CONTEXT_TYPE_GLX ? GSR_PLUGIN_GRAPHICS_API_GLX : GSR_PLUGIN_GRAPHICS_API_EGL_ES,
@@ -3565,7 +3552,7 @@ int main(int argc, char **argv) {
while(running) {
void *sound_buffer;
int sound_buffer_size = -1;
//const double time_before_read_seconds = clock_get_monotonic_seconds();
const double time_before_read_seconds = clock_get_monotonic_seconds();
if(audio_device.sound_device.handle) {
// TODO: use this instead of calculating time to read. But this can fluctuate and we don't want to go back in time,
// also it's 0.0 for some users???
@@ -3575,8 +3562,6 @@ int main(int argc, char **argv) {
const bool got_audio_data = sound_buffer_size >= 0;
//fprintf(stderr, "got audio data: %s\n", got_audio_data ? "yes" : "no");
//const double time_after_read_seconds = clock_get_monotonic_seconds();
//const double time_to_read_seconds = time_after_read_seconds - time_before_read_seconds;
//fprintf(stderr, "time to read: %f, %s, %f\n", time_to_read_seconds, got_audio_data ? "yes" : "no", timeout_sec);
const double this_audio_frame_time = clock_get_monotonic_seconds() - paused_time_offset;
@@ -3645,10 +3630,9 @@ int main(int argc, char **argv) {
}
}
if(!audio_device.sound_device.handle)
if(!audio_device.sound_device.handle) {
av_usleep(timeout_ms * 1000);
if(got_audio_data) {
} else if(got_audio_data) {
// TODO: Instead of converting audio, get float audio from alsa. Or does alsa do conversion internally to get this format?
if(needs_audio_conversion)
swr_convert(swr, &audio_device.frame->data[0], audio_track.codec_context->frame_size, (const uint8_t**)&sound_buffer, audio_track.codec_context->frame_size);
@@ -3676,6 +3660,13 @@ int main(int argc, char **argv) {
audio_device.frame->pts += audio_track.codec_context->frame_size;
num_received_frames++;
} else {
// TODO: Maybe sleep for time_to_sleep_until_next_frame/4? for better latency
const double time_after_read_seconds = clock_get_monotonic_seconds();
const double time_to_read_seconds = time_after_read_seconds - time_before_read_seconds;
const double time_to_sleep_until_next_frame = timeout_sec - time_to_read_seconds;
if(time_to_sleep_until_next_frame > 0.0)
av_usleep(time_to_sleep_until_next_frame * 1000ULL * 1000ULL);
}
}
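The else-branch added in this hunk sleeps only for whatever remains of the audio period after the blocking read, instead of a fixed timeout. A minimal standalone sketch of that timing calculation (the helper name is illustrative, not the actual gsr code):

```c
#include <assert.h>

// Sketch: when no audio data arrived, sleep only for the part of the
// period that the (failed) read did not already consume, clamped at 0
// so we never pass a negative duration to av_usleep().
static double remaining_sleep_seconds(double timeout_sec, double time_before_read, double time_after_read) {
    const double time_to_read = time_after_read - time_before_read;
    const double remaining = timeout_sec - time_to_read;
    return remaining > 0.0 ? remaining : 0.0;
}
```

This is also why the hunk uncomments `time_before_read_seconds`: the elapsed read time is now needed to compute the residual sleep.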
@@ -3819,9 +3810,9 @@ int main(int argc, char **argv) {
if(plugins.num_plugins > 0) {
gsr_plugins_draw(&plugins);
gsr_color_conversion_draw(&color_conversion, plugins.texture,
{0, 0}, {capture_metadata.width, capture_metadata.height},
{0, 0}, {capture_metadata.width, capture_metadata.height},
{capture_metadata.width, capture_metadata.height}, GSR_ROT_0, GSR_SOURCE_COLOR_RGB, false);
{0, 0}, {capture_metadata.video_width, capture_metadata.video_height},
{0, 0}, {capture_metadata.video_width, capture_metadata.video_height},
{capture_metadata.video_width, capture_metadata.video_height}, GSR_ROT_0, GSR_SOURCE_COLOR_RGB, false);
}
if(capture_has_synchronous_task) {


@@ -58,6 +58,29 @@ static bool requested_link_matches_name_case_insensitive(const gsr_pipewire_audi
return false;
}
static bool requested_link_matches_name_case_insensitive_any_type(const gsr_pipewire_audio *self, const gsr_pipewire_audio_requested_link *requested_link, const char *name) {
for(int i = 0; i < requested_link->num_outputs; ++i) {
switch(requested_link->outputs[i].type) {
case GSR_PIPEWIRE_AUDIO_REQUESTED_TYPE_STANDARD: {
if(strcasecmp(requested_link->outputs[i].name, name) == 0)
return true;
break;
}
case GSR_PIPEWIRE_AUDIO_REQUESTED_TYPE_DEFAULT_OUTPUT: {
if(strcasecmp(self->default_output_device_name, name) == 0)
return true;
break;
}
case GSR_PIPEWIRE_AUDIO_REQUESTED_TYPE_DEFAULT_INPUT: {
if(strcasecmp(self->default_input_device_name, name) == 0)
return true;
break;
}
}
}
return false;
}
static bool requested_link_has_type(const gsr_pipewire_audio_requested_link *requested_link, gsr_pipewire_audio_requested_type type) {
for(int i = 0; i < requested_link->num_outputs; ++i) {
if(requested_link->outputs[i].type == type)
@@ -168,7 +191,7 @@ static void gsr_pipewire_audio_create_link(gsr_pipewire_audio *self, const gsr_p
if(output_node->type != requested_link->output_type)
continue;
const bool requested_link_matches_app = requested_link_matches_name_case_insensitive(requested_link, output_node->name);
const bool requested_link_matches_app = requested_link_matches_name_case_insensitive_any_type(self, requested_link, output_node->name);
if(requested_link->inverted) {
if(requested_link_matches_app)
continue;
@@ -642,20 +665,6 @@ void gsr_pipewire_audio_deinit(gsr_pipewire_audio *self) {
pw_thread_loop_stop(self->thread_loop);
}
for(size_t i = 0; i < self->num_virtual_sink_proxies; ++i) {
if(self->virtual_sink_proxies[i]) {
pw_proxy_destroy(self->virtual_sink_proxies[i]);
self->virtual_sink_proxies[i] = NULL;
}
}
self->num_virtual_sink_proxies = 0;
self->virtual_sink_proxies_capacity_items = 0;
if(self->virtual_sink_proxies) {
free(self->virtual_sink_proxies);
self->virtual_sink_proxies = NULL;
}
if(self->metadata_proxy) {
spa_hook_remove(&self->metadata_listener);
spa_hook_remove(&self->metadata_proxy_listener);
@@ -733,54 +742,6 @@ void gsr_pipewire_audio_deinit(gsr_pipewire_audio *self) {
#endif
}
static struct pw_properties* gsr_pipewire_create_null_audio_sink(const char *name) {
char props_str[512];
snprintf(props_str, sizeof(props_str),
"{ factory.name=support.null-audio-sink node.name=\"%s\" media.class=Audio/Sink object.linger=false audio.position=[FL FR]"
" monitor.channel-volumes=true monitor.passthrough=true adjust_time=0 node.description=gsr-app-sink slaves=\"\" priority.driver=1 priority.session=1 }", name);
struct pw_properties *props = pw_properties_new_string(props_str);
if(!props) {
fprintf(stderr, "gsr error: gsr_pipewire_create_null_audio_sink: failed to create virtual sink properties\n");
return NULL;
}
return props;
}
bool gsr_pipewire_audio_create_virtual_sink(gsr_pipewire_audio *self, const char *name) {
if(!array_ensure_capacity((void**)&self->virtual_sink_proxies, self->num_virtual_sink_proxies, &self->virtual_sink_proxies_capacity_items, sizeof(struct pw_proxy*)))
return false;
pw_thread_loop_lock(self->thread_loop);
struct pw_properties *virtual_sink_props = gsr_pipewire_create_null_audio_sink(name);
if(!virtual_sink_props) {
pw_thread_loop_unlock(self->thread_loop);
return false;
}
struct pw_proxy *virtual_sink_proxy = pw_core_create_object(self->core, "adapter", PW_TYPE_INTERFACE_Node, PW_VERSION_NODE, &virtual_sink_props->dict, 0);
// TODO:
// If these are done then the above needs sizeof(*self) as the last argument
//pw_proxy_add_object_listener(virtual_sink_proxy, &pd->object_listener, &node_events, self);
//pw_proxy_add_listener(virtual_sink_proxy, &pd->proxy_listener, &proxy_events, self);
// TODO: proxy
pw_properties_free(virtual_sink_props);
if(!virtual_sink_proxy) {
fprintf(stderr, "gsr error: gsr_pipewire_audio_create_virtual_sink: failed to create virtual sink\n");
pw_thread_loop_unlock(self->thread_loop);
return false;
}
self->server_version_sync = pw_core_sync(self->core, PW_ID_CORE, self->server_version_sync);
pw_thread_loop_wait(self->thread_loop);
pw_thread_loop_unlock(self->thread_loop);
self->virtual_sink_proxies[self->num_virtual_sink_proxies] = virtual_sink_proxy;
++self->num_virtual_sink_proxies;
return true;
}
static bool string_remove_suffix(char *str, const char *suffix) {
int str_len = strlen(str);
int suffix_len = strlen(suffix);
@@ -834,6 +795,7 @@ static bool gsr_pipewire_audio_add_links_to_output(gsr_pipewire_audio *self, con
self->requested_links[self->num_requested_links].inverted = inverted;
++self->num_requested_links;
gsr_pipewire_audio_create_link(self, &self->requested_links[self->num_requested_links - 1]);
// TODO: Remove these?
gsr_pipewire_audio_create_link_for_default_devices(self, &self->requested_links[self->num_requested_links - 1], GSR_PIPEWIRE_AUDIO_REQUESTED_TYPE_DEFAULT_OUTPUT);
gsr_pipewire_audio_create_link_for_default_devices(self, &self->requested_links[self->num_requested_links - 1], GSR_PIPEWIRE_AUDIO_REQUESTED_TYPE_DEFAULT_INPUT);
pw_thread_loop_unlock(self->thread_loop);
@@ -865,6 +827,10 @@ bool gsr_pipewire_audio_add_link_from_apps_to_sink_inverted(gsr_pipewire_audio *
return gsr_pipewire_audio_add_links_to_output(self, app_names, num_app_names, sink_name_input, GSR_PIPEWIRE_AUDIO_NODE_TYPE_STREAM_OUTPUT, GSR_PIPEWIRE_AUDIO_LINK_INPUT_TYPE_SINK, true);
}
bool gsr_pipewire_audio_add_link_from_sources_to_stream(gsr_pipewire_audio *self, const char **source_names, int num_source_names, const char *stream_name_input) {
return gsr_pipewire_audio_add_links_to_output(self, source_names, num_source_names, stream_name_input, GSR_PIPEWIRE_AUDIO_NODE_TYPE_SINK_OR_SOURCE, GSR_PIPEWIRE_AUDIO_LINK_INPUT_TYPE_STREAM, false);
}
bool gsr_pipewire_audio_add_link_from_sources_to_sink(gsr_pipewire_audio *self, const char **source_names, int num_source_names, const char *sink_name_input) {
return gsr_pipewire_audio_add_links_to_output(self, source_names, num_source_names, sink_name_input, GSR_PIPEWIRE_AUDIO_NODE_TYPE_SINK_OR_SOURCE, GSR_PIPEWIRE_AUDIO_LINK_INPUT_TYPE_SINK, false);
}


@@ -280,7 +280,8 @@ static void on_param_changed_cb(void *user_data, uint32_t id, const struct spa_p
self->format.info.raw.format,
spa_debug_type_find_name(spa_type_video_format, self->format.info.raw.format));
if(has_modifier) {
self->has_modifier = has_modifier;
if(self->has_modifier) {
fprintf(stderr, "gsr info: pipewire: Modifier: 0x%" PRIx64 "\n", self->format.info.raw.modifier);
}
@@ -736,7 +737,6 @@ void gsr_pipewire_video_deinit(gsr_pipewire_video *self) {
self->dmabuf_num_planes = 0;
self->negotiated = false;
self->renegotiated = false;
if(self->mutex_initialized) {
pthread_mutex_destroy(&self->mutex);
@@ -783,21 +783,20 @@ static EGLImage gsr_pipewire_video_create_egl_image_with_fallback(gsr_pipewire_v
}
EGLImage image = NULL;
if(self->no_modifiers_fallback) {
if(self->no_modifiers_fallback || !self->has_modifier) {
image = gsr_pipewire_video_create_egl_image(self, fds, offsets, pitches, modifiers, false);
} else {
image = gsr_pipewire_video_create_egl_image(self, fds, offsets, pitches, modifiers, true);
if(!image) {
if(self->renegotiated || self->format.info.raw.modifier == DRM_FORMAT_MOD_INVALID) {
if(self->format.info.raw.modifier == DRM_FORMAT_MOD_INVALID) {
fprintf(stderr, "gsr error: gsr_pipewire_video_create_egl_image_with_fallback: failed to create egl image with modifiers, trying without modifiers\n");
self->no_modifiers_fallback = true;
image = gsr_pipewire_video_create_egl_image(self, fds, offsets, pitches, modifiers, false);
} else {
fprintf(stderr, "gsr error: gsr_pipewire_video_create_egl_image_with_fallback: failed to create egl image with modifiers, renegotiating with a different modifier\n");
self->negotiated = false;
self->renegotiated = true;
gsr_pipewire_video_remove_modifier(self, self->format.info.raw.modifier);
pw_thread_loop_lock(self->thread_loop);
gsr_pipewire_video_remove_modifier(self, self->format.info.raw.modifier);
pw_loop_signal_event(pw_thread_loop_get_loop(self->thread_loop), self->reneg);
pw_thread_loop_unlock(self->thread_loop);
}
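The retry logic changed in this hunk (matching the "re-negotiate modifiers multiple times until we get a good one" commit) amounts to a loop over the advertised modifier list: try the negotiated modifier, and on import failure drop it and renegotiate, only falling back to implicit (no-modifier) import when the compositor hands back DRM_FORMAT_MOD_INVALID. A rough standalone sketch under those assumptions (the names, callback, and in-place list handling are illustrative, not the gsr_pipewire_video API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MOD_INVALID ((uint64_t)0x00ffffffffffffffULL) /* stand-in for DRM_FORMAT_MOD_INVALID */

// Sketch: keep trying the first modifier in the negotiated list; remove a
// failing modifier and renegotiate with the remainder. Returns the modifier
// that worked, or MOD_INVALID to signal the no-modifiers fallback path.
static uint64_t negotiate_modifier(uint64_t *modifiers, int *num_modifiers,
                                   bool (*try_import)(uint64_t)) {
    while (*num_modifiers > 0) {
        const uint64_t candidate = modifiers[0];
        if (candidate == MOD_INVALID)
            return MOD_INVALID; /* compositor offered no explicit modifier */
        if (try_import(candidate))
            return candidate;   /* good modifier found */
        /* remove the failing modifier, then renegotiate with the rest */
        for (int i = 1; i < *num_modifiers; ++i)
            modifiers[i - 1] = modifiers[i];
        *num_modifiers -= 1;
    }
    return MOD_INVALID;
}
```

Note the hunk also moves `gsr_pipewire_video_remove_modifier` inside the `pw_thread_loop_lock`/`unlock` pair, so the modifier list is not mutated concurrently with the renegotiation event.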


@@ -189,7 +189,7 @@ static pa_handle* pa_sound_device_new(const char *server,
snprintf(p->stream_name, sizeof(p->stream_name), "%s", stream_name);
p->reconnect = true;
p->reconnect_last_tried_seconds = clock_get_monotonic_seconds() - 1000.0;
p->reconnect_last_tried_seconds = clock_get_monotonic_seconds() - (RECONNECT_TRY_TIMEOUT_SECONDS * 1000.0 * 2.0);
p->default_output_device_name[0] = '\0';
p->default_input_device_name[0] = '\0';
p->device_type = DeviceType::STANDARD;
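The initialization change above pushes `reconnect_last_tried_seconds` far enough into the past that the very first reconnect attempt is never throttled. The throttle presumably works along these lines (hypothetical helper and assumed timeout value, not the actual code in this file):

```c
#include <assert.h>
#include <stdbool.h>

#define RECONNECT_TRY_TIMEOUT_SECONDS 1.0 /* assumed value for illustration */

// Sketch: permit a reconnect attempt only if at least the timeout has
// elapsed since the previous attempt; record the attempt time on success.
static bool reconnect_allowed(double now_seconds, double *last_tried_seconds) {
    if (now_seconds - *last_tried_seconds < RECONNECT_TRY_TIMEOUT_SECONDS)
        return false;
    *last_tried_seconds = now_seconds;
    return true;
}
```

With the old `- 1000.0` initializer the first attempt already passed this check; the new `-(RECONNECT_TRY_TIMEOUT_SECONDS * 1000.0 * 2.0)` form just makes that guarantee explicit relative to the timeout constant instead of a magic number.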
@@ -206,10 +206,17 @@ static pa_handle* pa_sound_device_new(const char *server,
p->output_length = buffer_size;
p->output_index = 0;
pa_proplist *proplist = pa_proplist_new();
pa_proplist_sets(proplist, PA_PROP_MEDIA_ROLE, "production");
if(strcmp(device_name, "") == 0) {
pa_proplist_sets(proplist, "node.autoconnect", "false");
pa_proplist_sets(proplist, "node.dont-reconnect", "true");
}
if (!(p->mainloop = pa_mainloop_new()))
goto fail;
if (!(p->context = pa_context_new(pa_mainloop_get_api(p->mainloop), name)))
if (!(p->context = pa_context_new_with_proplist(pa_mainloop_get_api(p->mainloop), name, proplist)))
goto fail;
if (pa_context_connect(p->context, server, PA_CONTEXT_NOFLAGS, NULL) < 0) {
@@ -239,12 +246,14 @@ static pa_handle* pa_sound_device_new(const char *server,
if(pa)
pa_operation_unref(pa);
pa_proplist_free(proplist);
return p;
fail:
if (rerror)
*rerror = error;
pa_sound_device_free(p);
pa_proplist_free(proplist);
return NULL;
}
@@ -283,20 +292,6 @@ static bool pa_sound_device_handle_reconnect(pa_handle *p, char *device_name, si
return false;
}
for(;;) {
pa_stream_state_t state = pa_stream_get_state(p->stream);
if(state == PA_STREAM_READY)
break;
if(!PA_STREAM_IS_GOOD(state)) {
//pa_context_errno(p->context);
return false;
}
pa_mainloop_iterate(p->mainloop, 1, NULL);
}
std::lock_guard<std::mutex> lock(p->reconnect_mutex);
p->reconnect = false;
return true;
@@ -317,6 +312,10 @@ static int pa_sound_device_read(pa_handle *p, double timeout_seconds) {
if(!pa_sound_device_handle_reconnect(p, device_name, sizeof(device_name), start_time))
goto fail;
pa_mainloop_iterate(p->mainloop, 0, NULL);
if(pa_stream_get_state(p->stream) != PA_STREAM_READY)
goto fail;
CHECK_DEAD_GOTO(p, rerror, fail);
while (p->output_index < p->output_length) {
@@ -410,7 +409,7 @@ static int audio_format_to_get_bytes_per_sample(AudioFormat audio_format) {
return 2;
}
int sound_device_get_by_name(SoundDevice *device, const char *device_name, const char *description, unsigned int num_channels, unsigned int period_frame_size, AudioFormat audio_format) {
int sound_device_get_by_name(SoundDevice *device, const char *node_name, const char *device_name, const char *description, unsigned int num_channels, unsigned int period_frame_size, AudioFormat audio_format) {
pa_sample_spec ss;
ss.format = audio_format_to_pulse_audio_format(audio_format);
ss.rate = 48000;
@@ -424,7 +423,7 @@ int sound_device_get_by_name(SoundDevice *device, const char *device_name, const
buffer_attr.maxlength = buffer_attr.fragsize;
int error = 0;
pa_handle *handle = pa_sound_device_new(nullptr, description, device_name, description, &ss, &buffer_attr, &error);
pa_handle *handle = pa_sound_device_new(nullptr, node_name, device_name, description, &ss, &buffer_attr, &error);
if(!handle) {
fprintf(stderr, "gsr error: pa_sound_device_new() failed: %s. Audio input device %s might not be valid\n", pa_strerror(error), device_name);
return -1;