Compare commits


6 Commits
5.6.6 ... 5.6.7

Author | SHA1 | Message | Date
dec05eba | d3235a0be0 | 5.6.7 | 2025-09-06 19:15:08 +02:00
dec05eba | d4ee27716a | Cleanup debug output | 2025-09-06 01:26:12 +02:00
dec05eba | fcb45b82f2 | Re-add portal damage tracking (-fm content) | 2025-09-06 01:24:54 +02:00
dec05eba | 59d16899ab | Use pipewire audio routing to merge audio when possible (this fixes out of sync audio when using multiple audio inputs for some users) | 2025-09-06 00:18:12 +02:00
LinuxUserGD | f3fb8c4a93 | main: check if glibc is defined (musl libc doesn't implement M_MMAP_THRESHOLD) | 2025-09-03 18:19:15 +02:00
dec05eba | c073d43e30 | Update README | 2025-09-03 18:19:05 +02:00
6 changed files with 76 additions and 52 deletions

View File

@@ -67,7 +67,7 @@ Here are some known unofficial packages:
* Debian/Ubuntu: [Pacstall](https://pacstall.dev/packages/gpu-screen-recorder)
* Nix: [NixOS wiki](https://wiki.nixos.org/wiki/Gpu-screen-recorder)
* openSUSE: [openSUSE software repository](https://software.opensuse.org/package/gpu-screen-recorder)
-* Fedora: [Copr](https://copr.fedorainfracloud.org/coprs/brycensranch/gpu-screen-recorder-git/)
+* Fedora, CentOS: [Copr](https://copr.fedorainfracloud.org/coprs/brycensranch/gpu-screen-recorder-git/)
* OpenMandriva: [gpu-screen-recorder](https://github.com/OpenMandrivaAssociation/gpu-screen-recorder)
* Solus: [gpu-screen-recorder](https://github.com/getsolus/packages/tree/main/packages/g/gpu-screen-recorder)
* Nobara: [Nobara wiki](https://wiki.nobaraproject.org/en/general-usage/additional-software/GPU-Screen-Recorder)
@@ -169,6 +169,11 @@ An example plugin can be found at `plugin/examples/hello_triangle`.\
Run `gpu-screen-recorder` with the `-p` option to specify a plugin to load, for example `gpu-screen-recorder -w screen -p ./triangle.so -o video.mp4`.
`-p` can be specified multiple times to load multiple plugins.\
Build GPU Screen Recorder with the `-Dplugin_examples=true` meson option to build plugin examples.
## Smoother recording
If you record at your monitor's refresh rate and have vsync enabled in a game, there might be a desync between the game updating a frame and GPU Screen Recorder capturing a frame.
This is an issue in some games.
If you experience this issue, either disable vsync in the game or use the `-fm content` option to sync capture to the content on the screen. For example: `gpu-screen-recorder -w screen -fm content -o video.mp4`.\
Note that this option is currently only available on X11, or with desktop portal capture on Wayland (`-w portal`).
# Issues
## NVIDIA
Nvidia drivers have an issue where CUDA breaks if it is running when suspend/hibernation happens, and it remains broken until the nvidia driver is reloaded. `extra/gsr-nvidia.conf` is installed by default when you install GPU Screen Recorder and should fix this issue. If it doesn't fix the issue for you, your distro may use a different path for modprobe files; in that case, install `extra/gsr-nvidia.conf` into that location yourself.
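For reference, a modprobe file of this kind typically just sets the driver option that preserves video memory across suspend. The following is a hedged sketch of what such a file may contain; the actual `extra/gsr-nvidia.conf` shipped with your install is authoritative:

```ini
# Sketch of a gsr-nvidia.conf-style modprobe file (actual shipped contents may differ).
# Install into your distro's modprobe directory, commonly /etc/modprobe.d/ or /usr/lib/modprobe.d/.
# Preserves video memory allocations across suspend so CUDA sessions survive resume.
options nvidia NVreg_PreserveVideoMemoryAllocations=1
```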
@@ -223,4 +228,4 @@ then GPU Screen Recorder will automatically use that same GPU for recording and
## The rotation of the video is incorrect when the monitor is rotated while using desktop portal capture
This is a bug in KDE Plasma Wayland: when using desktop portal capture with a rotated monitor, making a window fullscreen causes KDE Plasma Wayland to report an incorrect rotation to GPU Screen Recorder.
This also affects other screen recording software, such as OBS Studio.\
Capture a monitor directly to work around this issue until the KDE Plasma developers fix it, or use another Wayland compositor that doesn't have this issue.

TODO
View File

@@ -71,26 +71,16 @@ Test if p2 state can be worked around by using pure nvenc api and overwriting cu
Drop frames if live streaming can't keep up with target fps, or dynamically change resolution/quality.
Support low power option.
Instead of sending a big list of drm data back to kms client, send the monitor we want to record to kms server and the server should respond with only the matching monitor, and cursor.
Tonemap hdr to sdr when hdr is enabled and when hevc_hdr/av1_hdr is not used.
Add 10 bit record option, h264_10bit, hevc_10bit and av1_10bit.
Rotate cursor texture properly (around top left origin).
Set up hardware video context so we can query constraints and capabilities for better defaults and better error messages.
Use CAP_SYS_NICE in flatpak too on the main gpu screen recorder binary. It makes recording smoother, especially with constant framerate.
Modify ffmpeg to accept opengl texture for nvenc encoding. Removes extra buffers and copies.
When vulkan encode is added, mention minimum nvidia driver required. (550.54.14?).
Support drm plane rotation. Neither X11 nor any Wayland compositor currently rotates drm planes so this might not be needed.
Investigate if there is a way to do gpu->gpu copy directly without touching system ram to enable video encoding on a different gpu. On nvidia this is possible with cudaMemcpyPeer, but how about from an intel/amd gpu to an nvidia gpu or the other way around or any combination of iGPU and dedicated GPU?
Maybe something with clEnqueueMigrateMemObjects? on AMD something with DirectGMA maybe?
@@ -104,6 +94,9 @@ Enable b-frames.
Support vfr matching games exact fps all the time. On x11 use damage tracking, on wayland? maybe there is drm plane damage tracking. But that may not be accurate as the compositor may update it every monitor hz anyways. On wayland maybe only support it for desktop portal + pipewire capture.
Another method to track damage that works regardless of the display server would be to do a diff between frames with a shader.
A 1x1 texture could be created and then write to the texture with imageStore in glsl.
Multiple textures aren't needed for diff, the diff between the color conversion output can be done by using it as an input
as well, which would diff it against the previous frame.
Support selecting which gpu to use. This can be done in egl with eglQueryDevicesEXT and then eglGetPlatformDisplayEXT. This will automatically work on AMD and Intel as vaapi uses the same device. On nvidia we need to use eglQueryDeviceAttribEXT with EGL_CUDA_DEVICE_NV.
Maybe on glx (nvidia x11 nvfbc) we need to use __NV_PRIME_RENDER_OFFLOAD, __NV_PRIME_RENDER_OFFLOAD_PROVIDER, __GLX_VENDOR_LIBRARY_NAME, __VK_LAYER_NV_optimus, VK_ICD_FILENAMES instead. Just look at prime-run /usr/bin/prime-run.
@@ -324,3 +317,9 @@ It's possible for microphone audio to get desynced when recording together with
We can use dri2connect/dri3open to get the /dev/dri/card device. Note that this doesn't work on nvidia x11.
Add support for QVBR (QP with target bitrate).
KDE Plasma Wayland seems to use overlay planes now in non-fullscreen mode (limited to 1 overlay plane per GPU). Check if this is the case in the latest KDE on Arch Linux.
If it is, then support it in kms capture.
Check if pipewire audio link-factory is available before attempting to use app audio or merging audio with pipewire.
Also do the same in supports_app_audio check in gpu-screen-recorder --info output.

View File

@@ -1,4 +1,4 @@
-project('gpu-screen-recorder', ['c', 'cpp'], version : '5.6.6', default_options : ['warning_level=2'])
+project('gpu-screen-recorder', ['c', 'cpp'], version : '5.6.7', default_options : ['warning_level=2'])
add_project_arguments('-Wshadow', language : ['c', 'cpp'])
if get_option('buildtype') == 'debug'

View File

@@ -1,7 +1,7 @@
[package]
name = "gpu-screen-recorder"
type = "executable"
-version = "5.6.6"
+version = "5.6.7"
platforms = ["posix"]
[config]

View File

@@ -76,6 +76,12 @@ static const int VIDEO_STREAM_INDEX = 0;
static thread_local char av_error_buffer[AV_ERROR_MAX_STRING_SIZE];
+enum class AudioMergeType {
+NONE,
+AMIX,
+PIPEWIRE
+};
typedef struct {
const gsr_window *window;
} MonitorOutputCallbackUserdata;
@@ -3052,7 +3058,7 @@ static void set_display_server_environment_variables() {
int main(int argc, char **argv) {
setlocale(LC_ALL, "C"); // Sigh... stupid C
-#ifdef __linux__
+#ifdef __GLIBC__
mallopt(M_MMAP_THRESHOLD, 65536);
#endif
@@ -3119,12 +3125,24 @@ int main(int argc, char **argv) {
std::vector<MergedAudioInputs> requested_audio_inputs = parse_audio_inputs(audio_devices, audio_input_arg);
const bool uses_app_audio = merged_audio_inputs_has_app_audio(requested_audio_inputs);
+AudioMergeType audio_merge_type = AudioMergeType::NONE;
std::vector<std::string> app_audio_names;
#ifdef GSR_APP_AUDIO
+const bool audio_server_is_pipewire = audio_input_arg->num_values > 0 && pulseaudio_server_is_pipewire();
+if(merged_audio_inputs_should_use_amix(requested_audio_inputs)) {
+if(audio_server_is_pipewire || uses_app_audio)
+audio_merge_type = AudioMergeType::PIPEWIRE;
+else
+audio_merge_type = AudioMergeType::AMIX;
+}
gsr_pipewire_audio pipewire_audio;
memset(&pipewire_audio, 0, sizeof(pipewire_audio));
-if(uses_app_audio) {
-if(!pulseaudio_server_is_pipewire()) {
+// TODO: When recording multiple audio devices and merging them (for example desktop audio and microphone) then one (or more) of the audio sources
+// can get desynced. I'm unable to reproduce this but some others are. Instead of merging audio with ffmpeg amix, merge audio with pipewire (if available).
+// This fixes the issue for people that had the issue.
+if(audio_merge_type == AudioMergeType::PIPEWIRE || uses_app_audio) {
+if(!audio_server_is_pipewire) {
fprintf(stderr, "gsr error: your sound server is not PipeWire. Application audio is only available when running PipeWire audio server\n");
_exit(2);
}
@@ -3140,6 +3158,14 @@ int main(int argc, char **argv) {
return true;
}, &app_audio_names);
}
+#else
+if(merged_audio_inputs_should_use_amix(requested_audio_inputs))
+audio_merge_type = AudioMergeType::AMIX;
+if(uses_app_audio) {
+fprintf(stderr, "gsr error: application audio can't be recorded because GPU Screen Recorder is built without application audio support (-Dapp_audio option)\n");
+_exit(2);
+}
+#endif
validate_merged_audio_inputs_app_audio(requested_audio_inputs, app_audio_names);
@@ -3245,8 +3271,7 @@ int main(int argc, char **argv) {
const bool force_no_audio_offset = arg_parser.is_livestream || arg_parser.is_output_piped || (file_extension != "mp4" && file_extension != "mkv" && file_extension != "webm");
const double target_fps = 1.0 / (double)arg_parser.fps;
-const bool uses_amix = merged_audio_inputs_should_use_amix(requested_audio_inputs);
-arg_parser.audio_codec = select_audio_codec_with_fallback(arg_parser.audio_codec, file_extension, uses_amix);
+arg_parser.audio_codec = select_audio_codec_with_fallback(arg_parser.audio_codec, file_extension, audio_merge_type == AudioMergeType::AMIX);
gsr_capture *capture = create_capture_impl(arg_parser, &egl, false);
@@ -3403,7 +3428,7 @@ int main(int argc, char **argv) {
std::vector<AVFilterContext*> src_filter_ctx;
AVFilterGraph *graph = nullptr;
AVFilterContext *sink = nullptr;
-if(use_amix) {
+if(use_amix && audio_merge_type == AudioMergeType::AMIX) {
int err = init_filter_graph(audio_codec_context, &graph, &sink, src_filter_ctx, merged_audio_inputs.audio_inputs.size());
if(err < 0) {
fprintf(stderr, "gsr error: failed to create audio filter\n");
@@ -3420,8 +3445,7 @@ int main(int argc, char **argv) {
const double num_audio_frames_shift = audio_startup_time_seconds / timeout_sec;
std::vector<AudioDeviceData> audio_track_audio_devices;
-if(audio_inputs_has_app_audio(merged_audio_inputs.audio_inputs)) {
-assert(!use_amix);
+if((use_amix && audio_merge_type == AudioMergeType::PIPEWIRE) || audio_inputs_has_app_audio(merged_audio_inputs.audio_inputs)) {
#ifdef GSR_APP_AUDIO
audio_track_audio_devices.push_back(create_application_audio_audio_input(merged_audio_inputs, audio_codec_context, num_channels, num_audio_frames_shift, &pipewire_audio));
#endif
@@ -3636,7 +3660,7 @@ int main(int argc, char **argv) {
}
std::thread amix_thread;
-if(uses_amix) {
+if(audio_merge_type == AudioMergeType::AMIX) {
amix_thread = std::thread([&]() {
AVFrame *aframe = av_frame_alloc();
while(running) {
@@ -3677,15 +3701,14 @@ int main(int argc, char **argv) {
bool hdr_metadata_set = false;
const bool hdr = video_codec_is_hdr(arg_parser.video_codec);
double damage_timeout_seconds = arg_parser.framerate_mode == GSR_FRAMERATE_MODE_CONTENT ? 0.5 : 0.1;
+damage_timeout_seconds = std::max(damage_timeout_seconds, target_fps);
bool use_damage_tracking = false;
gsr_damage damage;
memset(&damage, 0, sizeof(damage));
if(gsr_window_get_display_server(window) == GSR_DISPLAY_SERVER_X11) {
gsr_damage_init(&damage, &egl, arg_parser.record_cursor);
use_damage_tracking = true;
} else if(!capture->is_damaged) {
fprintf(stderr, "gsr warning: \"-fm content\" has no effect on Wayland when recording a monitor. Either record a monitor on X11 or capture with desktop portal instead (-w portal)\n");
}
if(is_monitor_capture)

View File

@@ -116,8 +116,6 @@ static const struct pw_core_events core_events = {
static void on_process_cb(void *user_data) {
gsr_pipewire_video *self = user_data;
-struct spa_meta_cursor *cursor = NULL;
-//struct spa_meta *video_damage = NULL;
/* Find the most recent buffer */
struct pw_buffer *pw_buf = NULL;
@@ -137,12 +135,11 @@ static void on_process_cb(void *user_data) {
struct spa_buffer *buffer = pw_buf->buffer;
const bool has_buffer = buffer->datas[0].chunk->size != 0;
-if(!has_buffer)
-goto read_metadata;
pthread_mutex_lock(&self->mutex);
-if(buffer->datas[0].type == SPA_DATA_DmaBuf) {
+bool buffer_updated = false;
+if(has_buffer && buffer->datas[0].type == SPA_DATA_DmaBuf) {
for(size_t i = 0; i < self->dmabuf_num_planes; ++i) {
if(self->dmabuf_data[i].fd > 0) {
close(self->dmabuf_data[i].fd);
@@ -160,9 +157,7 @@ static void on_process_cb(void *user_data) {
self->dmabuf_data[i].stride = buffer->datas[i].chunk->stride;
}
-self->damaged = true;
-} else {
-// TODO:
+buffer_updated = true;
}
// TODO: Move down to read_metadata
@@ -201,32 +196,30 @@ static void on_process_cb(void *user_data) {
break;
}
pthread_mutex_unlock(&self->mutex);
+const struct spa_meta *video_damage = spa_buffer_find_meta(buffer, SPA_META_VideoDamage);
+if(video_damage) {
+struct spa_meta_region *meta_region = NULL;
+spa_meta_for_each(meta_region, video_damage) {
+if(meta_region->region.size.width == 0 || meta_region->region.size.height == 0)
+continue;
-read_metadata:
+self->damaged = true;
+break;
+}
+} else if(buffer_updated) {
+self->damaged = true;
+}
-// video_damage = spa_buffer_find_meta(buffer, SPA_META_VideoDamage);
-// if(video_damage) {
-// struct spa_meta_region *r = spa_meta_first(video_damage);
-// if(spa_meta_check(r, video_damage)) {
-// //fprintf(stderr, "damage: %d,%d %ux%u\n", r->region.position.x, r->region.position.y, r->region.size.width, r->region.size.height);
-// pthread_mutex_lock(&self->mutex);
-// self->damaged = true;
-// pthread_mutex_unlock(&self->mutex);
-// }
-// }
-cursor = spa_buffer_find_meta_data(buffer, SPA_META_Cursor, sizeof(*cursor));
+const struct spa_meta_cursor *cursor = spa_buffer_find_meta_data(buffer, SPA_META_Cursor, sizeof(*cursor));
self->cursor.valid = cursor && spa_meta_cursor_is_valid(cursor);
if (self->cursor.visible && self->cursor.valid) {
pthread_mutex_lock(&self->mutex);
struct spa_meta_bitmap *bitmap = NULL;
if (cursor->bitmap_offset)
bitmap = SPA_MEMBER(cursor, cursor->bitmap_offset, struct spa_meta_bitmap);
-if (bitmap && bitmap->size.width > 0 && bitmap->size.height && is_cursor_format_supported(bitmap->format)) {
+// TODO: Maybe check if the cursor is actually visible by checking if there are visible pixels
+if (bitmap && bitmap->size.width > 0 && bitmap->size.height > 0 && is_cursor_format_supported(bitmap->format)) {
const uint8_t *bitmap_data = SPA_MEMBER(bitmap, bitmap->offset, uint8_t);
fprintf(stderr, "gsr info: pipewire: cursor bitmap update, size: %dx%d, format: %s\n",
(int)bitmap->size.width, (int)bitmap->size.height, spa_debug_type_find_name(spa_type_video_format, bitmap->format));
@@ -243,15 +236,19 @@ read_metadata:
self->cursor.hotspot_y = cursor->hotspot.y;
self->cursor.width = bitmap->size.width;
self->cursor.height = bitmap->size.height;
+self->damaged = true;
}
+if(cursor->position.x != self->cursor.x || cursor->position.y != self->cursor.y)
+self->damaged = true;
self->cursor.x = cursor->position.x;
self->cursor.y = cursor->position.y;
pthread_mutex_unlock(&self->mutex);
//fprintf(stderr, "gsr info: pipewire: cursor: %d %d %d %d\n", cursor->hotspot.x, cursor->hotspot.y, cursor->position.x, cursor->position.y);
}
pthread_mutex_unlock(&self->mutex);
pw_stream_queue_buffer(self->stream, pw_buf);
}