mirror of
https://repo.dec05eba.com/gpu-screen-recorder
synced 2026-04-04 18:46:37 +09:00
Compare commits
13 Commits
| Author | SHA1 | Date |
|---|---|---|
| | 2c53638bb0 | |
| | 80c5566d40 | |
| | 3ac17b99a0 | |
| | 2064d109ee | |
| | cedf3ae7d7 | |
| | af941f602b | |
| | c1614e4f30 | |
| | f00dec683e | |
| | 11930c355f | |
| | 716dc0b736 | |
| | 059e3dbbc0 | |
| | 990dfc7589 | |
| | 2d551e7b1f | |
11
README.md
@@ -119,8 +119,8 @@ Streaming works the same way as recording, but the `-o` argument should be path
GPU Screen Recorder uses FFmpeg so GPU Screen Recorder supports all protocols that FFmpeg supports.\
If you want to reduce latency one thing you can do is to use the `-keyint` option, for example `-keyint 0.5`. A lower value means lower latency at the cost of increased bitrate/decreased quality.
## Recording while using replay/streaming
-You can record a regular video while using replay/streaming by launching GPU Screen Recorder with the `-ro` option to specify a directory where to save the recording.\
-To start/stop (and save) recording use the SIGRTMIN signal, for example `pkill -SIGRTMIN -f gpu-screen-recorder`. The name of the video will be displayed in stdout when saving the video.\
+You can record a regular video while using replay/streaming by launching GPU Screen Recorder with the `-ro` option to specify a directory where to save the recording (for example: `gpu-screen-recorder -w screen -c mp4 -r 60 -o "$HOME/Videos/replays" -ro "$HOME/Videos/recordings"`).\
+To start/stop (and save) recording use the SIGRTMIN signal, for example `pkill -SIGRTMIN -f gpu-screen-recorder`. The path to the video will be displayed in stdout when saving the video.\
This way of recording while using replay/streaming is more efficient than running GPU Screen Recorder multiple times since this way it only records the screen and encodes the video once.
## Controlling GPU Screen Recorder remotely
To save a video in replay mode, you need to send signal SIGUSR1 to gpu screen recorder. You can do this by running `pkill -SIGUSR1 -f gpu-screen-recorder`.\
@@ -242,3 +242,10 @@ This also affects other screen recording software such as obs studio.\
Capture a monitor directly instead to work around this issue until kde plasma devs fix it, or use another wayland compositor that doesn't have this issue.
+## System notifications get disabled when recording with desktop portal option
+Some desktop environments such as KDE Plasma turn off notifications when you record the screen with the desktop portal option. You can disable this by going into KDE Plasma settings -> search for notifications and then under "Do Not Disturb mode" untick "During screen sharing".
+## The recorded video lags or I get dropped frames in the video
+This is likely not an issue in the recorded video itself, but the video player you use. GPU Screen Recorder doesn't record by dropping frames. Some video players don't play videos with hardware acceleration by default,
+especially if you record with the HEVC/AV1 video codec. In such cases it's recommended to play the video with mpv instead with hardware acceleration enabled (for example: `mpv --vo=gpu --hwdec=auto video.mp4`).
+Some corporate distros such as Fedora (or some Fedora based distros) also disable hardware accelerated video codecs on AMD/Intel GPUs, so you might need to install mpv (or another video player) from flathub instead, which bypasses this restriction.
+## My cursor is flickering in the recorded video
+This is likely an AMD gpu driver issue. It only happens to certain generations of AMD GPUs. On Wayland you can record with the desktop portal option (`-w portal`) to work around this issue.
+This issue hasn't been observed on X11 yet, but if you do observe it you can either record a window (`-w $(xdotool selectwindow)`) or change your xorg config to use a software cursor instead (add `Option "SWcursor" "true"` under the modesetting "Device" section in your xorg config file).
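The signal-based control described in the README changes above can be sketched in C. This is a hypothetical helper (not part of gpu-screen-recorder) that maps a controller action name to the signal the README documents: SIGUSR1 saves a replay, SIGRTMIN starts/stops (and saves) the regular recording started with `-ro`.

```c
#include <signal.h>
#include <string.h>

/* Hypothetical mapping from a control action to the signal GPU Screen
   Recorder listens for, per the README: SIGUSR1 saves a replay,
   SIGRTMIN toggles the regular recording. Returns -1 for unknown actions.
   The action names here are illustrative, not part of gsr itself. */
static int gsr_signal_for_action(const char *action) {
    if(strcmp(action, "save-replay") == 0)
        return SIGUSR1;
    if(strcmp(action, "toggle-recording") == 0)
        return SIGRTMIN;
    return -1;
}
```

A controller process would then send the returned signal with `kill(gsr_pid, sig)`, which is what `pkill -SIGUSR1 -f gpu-screen-recorder` does under the hood.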
12
TODO
@@ -294,9 +294,6 @@ Disable GL_DEPTH_TEST, GL_CULL_FACE.

kde plasma portal capture for screenshot doesn't work well because the portal ui is still visible when taking a screenshot because of its animation.

It's possible for microphone audio to get desynced when recording together with desktop audio, when not recording app audio as well.
Test recording desktop audio and microphone audio together (-a "default_output|default_input") for around 30 minutes.

We can use dri2connect/dri3open to get the /dev/dri/card device. Note that this doesn't work on nvidia x11.

Add support for QVBR (QP with a target bitrate). Maybe use VBR instead, since nvidia doesn't support QVBR and neither does vulkan.
@@ -314,10 +311,6 @@ Set top level window argument for portal capture. Same for gpu-screen-recorder-g

Remove unix domain socket code from kms-client/server and use socketpair directly. To make this possible always execute the kms server permission setup in flatpak, before starting recording (in gpu-screen-recorder-gtk).

Application audio capture isn't good enough. It creates a sink that for some reason automatically gets selected as the default output device and it's visible as an output device.
Fix some of these issues by setting the gsr-app-sink media class to "Stream/Input/Audio" and node.virtual=true.
However that causes pulseaudio to be unable to record from gsr-app-sink, and it ends up being stuck in pa_sound_device_handle_reconnect in the loop with pa_mainloop_iterate.

Add a -k best/best_hdr/best_10bit option, to automatically choose the best codec (prefer av1, then hevc and then h264. For webm files choose vp9 and then vp8).

Check if region capture works properly with fractional scaling on wayland.
@@ -356,3 +349,8 @@ Support youtube sso.
Remove -fm content (support it but remove it from the documentation and output a deprecation notice when it's used) and use it when using -fm vbr (which is the default option).
But first -fm content needs to be supported on wayland as well, by checking if there is a difference between frames (checksum the frame content).
-fm content also needs to have a minimum fps to prevent a live stream from timing out when nothing changes on the screen.

There is a leak in nvfbc. When a monitor is turned off and then on there will be an x11 display leak inside nvfbc. This seems to be a bug in nvfbc.
Right now a mitigation has been added to not try to recreate the nvfbc session if the capture target (monitor) isn't connected (predict if nvfbc session create will fail).
One possible reason this happens is because bExternallyManagedContext is set to true.
This also means that nvfbc leaks a connection when destroying nvfbc, even if the monitor is connected (this is not an issue right now because exit is done, but if gsr was turned into a library it would be).
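The "-k best" preference order described in the TODO (prefer av1, then hevc, then h264; for webm files vp9, then vp8) can be sketched as a small selection function. This is a hypothetical sketch of that ordering, not the actual gsr implementation.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of the "-k best" codec preference from the TODO:
   av1 > hevc > h264 for regular files, vp9 > vp8 for webm files.
   Each bool says whether the system supports that codec. */
static const char* choose_best_codec(bool webm, bool av1, bool hevc, bool h264, bool vp9, bool vp8) {
    if(webm)
        return vp9 ? "vp9" : (vp8 ? "vp8" : NULL);
    if(av1)  return "av1";
    if(hevc) return "hevc";
    if(h264) return "h264";
    return NULL; /* nothing supported */
}
```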
@@ -94,18 +94,12 @@ typedef struct {
    size_t num_requested_links;
    size_t requested_links_capacity_items;

-    struct pw_proxy **virtual_sink_proxies;
-    size_t num_virtual_sink_proxies;
-    size_t virtual_sink_proxies_capacity_items;

    bool running;
} gsr_pipewire_audio;

bool gsr_pipewire_audio_init(gsr_pipewire_audio *self);
void gsr_pipewire_audio_deinit(gsr_pipewire_audio *self);

-bool gsr_pipewire_audio_create_virtual_sink(gsr_pipewire_audio *self, const char *name);

/*
    This function links audio source outputs from applications that match the name |app_names| to the input
    that matches the name |stream_name_input|.
@@ -140,6 +134,17 @@ bool gsr_pipewire_audio_add_link_from_apps_to_sink(gsr_pipewire_audio *self, con
*/
bool gsr_pipewire_audio_add_link_from_apps_to_sink_inverted(gsr_pipewire_audio *self, const char **app_names, int num_app_names, const char *sink_name_input);

+/*
+    This function links audio source outputs from devices that match the name |source_names| to the input
+    that matches the name |stream_name_input|.
+    If a device or a new device starts outputting audio after this function is called and the device name matches
+    then it will automatically link the audio sources.
+    |source_names| and |stream_name_input| are case-insensitive matches.
+    |source_names| can include "default_output" or "default_input" to use the default output/input
+    and it will automatically switch when the default output/input is changed in system audio settings.
+*/
+bool gsr_pipewire_audio_add_link_from_sources_to_stream(gsr_pipewire_audio *self, const char **source_names, int num_source_names, const char *stream_name_input);

/*
    This function links audio source outputs from devices that match the name |source_names| to the input
    that matches the name |sink_name_input|.
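The header comments above say that name matches are case-insensitive and that the special names "default_output"/"default_input" stand for the current default devices. A minimal sketch of that matching rule (the function and its parameters are hypothetical; the real code tracks the defaults live via PipeWire metadata instead of taking them as arguments):

```c
#include <stdbool.h>
#include <strings.h> /* strcasecmp (POSIX) */

/* Hypothetical sketch of the matching rule described in the header:
   |requested| is compared case-insensitively against a node name, with
   "default_output"/"default_input" resolving to the current default
   devices (passed in here for simplicity). */
static bool source_name_matches(const char *requested, const char *node_name,
                                const char *default_output, const char *default_input) {
    if(strcasecmp(requested, "default_output") == 0)
        return strcasecmp(default_output, node_name) == 0;
    if(strcasecmp(requested, "default_input") == 0)
        return strcasecmp(default_input, node_name) == 0;
    return strcasecmp(requested, node_name) == 0;
}
```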
@@ -79,8 +79,8 @@ typedef struct {
    struct spa_video_info format;
    int server_version_sync;
    bool negotiated;
-    bool renegotiated;
    bool damaged;
+    bool has_modifier;

    struct {
        bool visible;
@@ -61,12 +61,12 @@ typedef enum {

/*
    Get a sound device by name, returning the device into the |device| parameter.
-    |device_name| can be a device name or "default_output" or "default_input".
+    |device_name| can be a device name or "default_output", "default_input" or "" to not connect to any device (used for app audio for example).
    If the device name is "default_output" or "default_input" then it will automatically switch which
    device it records from when the default output/input is changed in the system audio settings.
    Returns 0 on success, or a negative value on failure.
*/
-int sound_device_get_by_name(SoundDevice *device, const char *device_name, const char *description, unsigned int num_channels, unsigned int period_frame_size, AudioFormat audio_format);
+int sound_device_get_by_name(SoundDevice *device, const char *node_name, const char *device_name, const char *description, unsigned int num_channels, unsigned int period_frame_size, AudioFormat audio_format);

void sound_device_close(SoundDevice *device);
@@ -1,4 +1,4 @@
-project('gpu-screen-recorder', ['c', 'cpp'], version : '5.9.2', default_options : ['warning_level=2'])
+project('gpu-screen-recorder', ['c', 'cpp'], version : '5.10.1', default_options : ['warning_level=2'])

add_project_arguments('-Wshadow', language : ['c', 'cpp'])
if get_option('buildtype') == 'debug'
@@ -1,7 +1,7 @@
[package]
name = "gpu-screen-recorder"
type = "executable"
-version = "5.9.2"
+version = "5.10.1"
platforms = ["posix"]

[config]
@@ -251,7 +251,7 @@ static void usage_full() {
printf(" Run GPU Screen Recorder with the --list-application-audio option to list valid application names. It's possible to use an application name that is not listed in --list-application-audio,\n");
printf(" for example when trying to record audio from an application that hasn't started yet.\n");
printf("\n");
-printf(" -q Video quality. Should be either 'medium', 'high', 'very_high' or 'ultra' when using '-bm qp' or '-bm vbr' options, and '-bm qp' is the default option used.\n");
+printf(" -q Video/image quality. Should be either 'medium', 'high', 'very_high' or 'ultra' when using '-bm qp' or '-bm vbr' options, and '-bm qp' is the default option used.\n");
printf(" 'high' is the recommended option when live streaming or when you have a slower harddrive.\n");
printf(" When using '-bm cbr' option then this option is instead used to specify the video bitrate in kbps.\n");
printf(" Optional when using '-bm qp' or '-bm vbr' options, set to 'very_high' by default.\n");
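The usage text above says `-q` takes a named quality level with `-bm qp`/`-bm vbr` but a numeric bitrate in kbps with `-bm cbr`. A hypothetical validator sketching that rule (not the actual argument parser in gsr):

```c
#include <ctype.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical sketch of how a "-q" value could be validated per the
   usage text: with '-bm qp'/'-bm vbr' it must be one of the named
   quality levels, with '-bm cbr' it is a bitrate in kbps (digits only). */
static bool quality_arg_valid(const char *q, bool bitrate_mode_cbr) {
    if(bitrate_mode_cbr) {
        if(*q == '\0')
            return false;
        for(const char *p = q; *p; ++p) {
            if(!isdigit((unsigned char)*p))
                return false;
        }
        return true;
    }
    return strcmp(q, "medium") == 0 || strcmp(q, "high") == 0 ||
           strcmp(q, "very_high") == 0 || strcmp(q, "ultra") == 0;
}
```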
@@ -13,6 +13,7 @@
#include <assert.h>

#include <X11/Xlib.h>
+#include <X11/extensions/Xrandr.h>

typedef struct {
    gsr_capture_nvfbc_params params;
@@ -302,6 +303,35 @@ static int gsr_capture_nvfbc_start(gsr_capture *cap, gsr_capture_metadata *captu
        return -1;
    }

+static bool gsr_capture_nvfbc_is_capture_monitor_connected(gsr_capture_nvfbc *self) {
+    Display *dpy = gsr_window_get_display(self->params.egl->window);
+    int num_monitors = 0;
+    XRRMonitorInfo *monitors = XRRGetMonitors(dpy, DefaultRootWindow(dpy), True, &num_monitors);
+    if(!monitors)
+        return false;
+
+    bool capture_monitor_connected = false;
+    if(strcmp(self->params.display_to_capture, "screen") == 0) {
+        capture_monitor_connected = num_monitors > 0;
+    } else {
+        for(int i = 0; i < num_monitors; ++i) {
+            char *monitor_name = XGetAtomName(dpy, monitors[i].name);
+            if(!monitor_name)
+                continue;
+
+            if(strcmp(monitor_name, self->params.display_to_capture) == 0) {
+                capture_monitor_connected = true;
+                XFree(monitor_name);
+                break;
+            }
+            XFree(monitor_name);
+        }
+    }
+
+    XRRFreeMonitors(monitors);
+    return capture_monitor_connected;
+}
+
static int gsr_capture_nvfbc_capture(gsr_capture *cap, gsr_capture_metadata *capture_metadata, gsr_color_conversion *color_conversion) {
    gsr_capture_nvfbc *self = cap->priv;

@@ -310,6 +340,13 @@ static int gsr_capture_nvfbc_capture(gsr_capture *cap, gsr_capture_metadata *cap
    const double now = clock_get_monotonic_seconds();
    if(now - self->nvfbc_dead_start >= nvfbc_recreate_retry_time_seconds) {
        self->nvfbc_dead_start = now;
+        /*
+            Do not attempt to recreate the nvfbc session if the monitor isn't turned on/connected.
+            This is to predict if the nvfbc session create below will fail since if it fails it leaks an x11 display (a bug in the nvidia driver).
+        */
+        if(!gsr_capture_nvfbc_is_capture_monitor_connected(self))
+            return 0;
+
        gsr_capture_nvfbc_destroy_session_and_handle(self);

        if(gsr_capture_nvfbc_setup_handle(self) != 0) {
@@ -322,6 +359,7 @@ static int gsr_capture_nvfbc_capture(gsr_capture *cap, gsr_capture_metadata *cap
            return -1;
        }

+        fprintf(stderr, "gsr info: gsr_capture_nvfbc_capture: recreated nvfbc session after modeset recovery\n");
        self->nvfbc_needs_recreate = false;
    } else {
        return 0;
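The nvfbc capture path above only retries recreating a dead session after `nvfbc_recreate_retry_time_seconds` has elapsed, and skips the attempt if the capture monitor is disconnected (to avoid the x11 display leak on failed session creation). That time gate can be isolated into a simplified, self-contained sketch (the struct and function here are illustrative, not the gsr types):

```c
#include <stdbool.h>

/* Simplified sketch of the retry gate in gsr_capture_nvfbc_capture:
   only attempt a session recreate once every |retry_time| seconds,
   and only if the capture monitor is still connected. */
typedef struct {
    double dead_start;     /* time of the last recreate attempt */
    bool monitor_connected;
} retry_gate;

static bool should_try_recreate(retry_gate *g, double now, double retry_time) {
    if(now - g->dead_start < retry_time)
        return false;
    /* rearm the timer even when we then skip due to the monitor check,
       mirroring how the real code sets nvfbc_dead_start before checking */
    g->dead_start = now;
    return g->monitor_connected;
}
```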
84
src/main.cpp
@@ -1416,8 +1416,12 @@ static const AudioDevice* get_audio_device_by_name(const std::vector<AudioDevice

static MergedAudioInputs parse_audio_input_arg(const char *str) {
    MergedAudioInputs result;
    result.track_name = str;

    split_string(str, '|', [&](const char *sub, size_t size) {
        if(size == 0)
            return true;

        AudioInput audio_input;
        audio_input.name.assign(sub, size);

@@ -1645,41 +1649,6 @@ static bool get_supported_video_codecs(gsr_egl *egl, gsr_video_codec video_codec
    return false;
}

-static void force_cpu_encoding(args_parser *args_parser) {
-    args_parser->video_codec = GSR_VIDEO_CODEC_H264;
-    args_parser->video_encoder = GSR_VIDEO_ENCODER_HW_CPU;
-    if(args_parser->bitrate_mode == GSR_BITRATE_MODE_VBR) {
-        fprintf(stderr, "gsr warning: bitrate mode has been forcefully set to qp because software encoding option doesn't support vbr option\n");
-        args_parser->bitrate_mode = GSR_BITRATE_MODE_QP;
-    }
-}
-
-static bool get_supported_video_codecs_with_cpu_fallback(gsr_egl *egl, args_parser *args_parser, bool cleanup, gsr_supported_video_codecs *video_codecs) {
-    if(get_supported_video_codecs(egl, args_parser->video_codec, args_parser->video_encoder == GSR_VIDEO_ENCODER_HW_CPU, cleanup, video_codecs)) {
-        if(args_parser->video_encoder == GSR_VIDEO_ENCODER_HW_CPU || !args_parser->fallback_cpu_encoding)
-            return true;
-        else if(args_parser->video_encoder == GSR_VIDEO_ENCODER_HW_GPU && video_codecs->h264.supported && (args_parser->video_codec == (gsr_video_codec)GSR_VIDEO_CODEC_AUTO || args_parser->video_codec == GSR_VIDEO_CODEC_H264))
-            return true;
-    }
-
-    if(args_parser->video_encoder == GSR_VIDEO_ENCODER_HW_CPU || !args_parser->fallback_cpu_encoding)
-        return false;
-
-    fprintf(stderr, "gsr warning: gpu encoding is not available on your system, trying cpu encoding instead because -fallback-cpu-encoding is enabled. Install the proper vaapi drivers on your system (if supported) if you experience performance issues\n");
-
-    if(get_supported_video_codecs(egl, GSR_VIDEO_CODEC_H264, true, cleanup, video_codecs)) {
-        if(args_parser->video_codec != (gsr_video_codec)GSR_VIDEO_CODEC_AUTO && args_parser->video_codec != GSR_VIDEO_CODEC_H264) {
-            fprintf(stderr, "gsr warning: cpu encoding is used but video codec isn't set to h264. Forcing video codec to h264\n");
-            args_parser->video_codec = GSR_VIDEO_CODEC_H264;
-        }
-
-        force_cpu_encoding(args_parser);
-        return true;
-    }
-
-    return false;
-}
-
static void xwayland_check_callback(const gsr_monitor *monitor, void *userdata) {
    bool *xwayland_found = (bool*)userdata;
    if(monitor->name_len >= 8 && strncmp(monitor->name, "XWAYLAND", 8) == 0)
@@ -2769,6 +2738,15 @@ static void print_codec_error(gsr_video_codec video_codec) {
        " If your GPU doesn't support hardware accelerated video encoding then you can use '-fallback-cpu-encoding yes' option to encode with your cpu instead.\n", video_codec_name, video_codec_name, video_codec_name);
}

+static void force_cpu_encoding(args_parser *args_parser) {
+    args_parser->video_codec = GSR_VIDEO_CODEC_H264;
+    args_parser->video_encoder = GSR_VIDEO_ENCODER_HW_CPU;
+    if(args_parser->bitrate_mode == GSR_BITRATE_MODE_VBR) {
+        fprintf(stderr, "gsr warning: bitrate mode has been forcefully set to qp because software encoding option doesn't support vbr option\n");
+        args_parser->bitrate_mode = GSR_BITRATE_MODE_QP;
+    }
+}
+
static const AVCodec* pick_video_codec(gsr_egl *egl, args_parser *args_parser, bool use_fallback_codec, bool *low_power, gsr_supported_video_codecs *supported_video_codecs) {
    // TODO: software encoder for hevc, av1, vp8 and vp9
    *low_power = false;
@@ -2806,7 +2784,7 @@ static const AVCodec* pick_video_codec(gsr_egl *egl, args_parser *args_parser, b
        fprintf(stderr, "gsr warning: selected video codec h264_vulkan is not supported, trying h264 instead\n");
        args_parser->video_codec = GSR_VIDEO_CODEC_H264;
        // Need to do a query again because this time it's without vulkan
-        if(!get_supported_video_codecs_with_cpu_fallback(egl, args_parser, true, supported_video_codecs)) {
+        if(!get_supported_video_codecs(egl, args_parser->video_codec, false, true, supported_video_codecs)) {
            fprintf(stderr, "gsr error: failed to query for supported video codecs\n");
            print_codec_error(args_parser->video_codec);
            _exit(11);
@@ -2817,7 +2795,7 @@ static const AVCodec* pick_video_codec(gsr_egl *egl, args_parser *args_parser, b
        fprintf(stderr, "gsr warning: selected video codec hevc_vulkan is not supported, trying hevc instead\n");
        args_parser->video_codec = GSR_VIDEO_CODEC_HEVC;
        // Need to do a query again because this time it's without vulkan
-        if(!get_supported_video_codecs_with_cpu_fallback(egl, args_parser, true, supported_video_codecs)) {
+        if(!get_supported_video_codecs(egl, args_parser->video_codec, false, true, supported_video_codecs)) {
            fprintf(stderr, "gsr error: failed to query for supported video codecs\n");
            print_codec_error(args_parser->video_codec);
            _exit(11);
@@ -2933,7 +2911,7 @@ static std::vector<AudioDeviceData> create_device_audio_inputs(const std::vector
        audio_device.sound_device.frames = 0;
    } else {
        const std::string description = "gsr-" + audio_input.name;
-        if(sound_device_get_by_name(&audio_device.sound_device, audio_input.name.c_str(), description.c_str(), num_channels, audio_codec_context->frame_size, audio_codec_context_get_audio_format(audio_codec_context)) != 0) {
+        if(sound_device_get_by_name(&audio_device.sound_device, description.c_str(), audio_input.name.c_str(), description.c_str(), num_channels, audio_codec_context->frame_size, audio_codec_context_get_audio_format(audio_codec_context)) != 0) {
            fprintf(stderr, "gsr error: failed to get \"%s\" audio device\n", audio_input.name.c_str());
            _exit(1);
        }
@@ -2958,17 +2936,12 @@ static AudioDeviceData create_application_audio_audio_input(const MergedAudioInp
        fprintf(stderr, "gsr error: failed to generate random string\n");
        _exit(1);
    }

    std::string combined_sink_name = "gsr-combined-";
    combined_sink_name.append(random_str, sizeof(random_str));

    if(!gsr_pipewire_audio_create_virtual_sink(pipewire_audio, combined_sink_name.c_str())) {
        fprintf(stderr, "gsr error: failed to create virtual sink for application audio\n");
        _exit(1);
    }

    combined_sink_name += ".monitor";

-    if(sound_device_get_by_name(&audio_device.sound_device, combined_sink_name.c_str(), "gpu-screen-recorder", num_channels, audio_codec_context->frame_size, audio_codec_context_get_audio_format(audio_codec_context)) != 0) {
+    if(sound_device_get_by_name(&audio_device.sound_device, combined_sink_name.c_str(), "", "gpu-screen-recorder", num_channels, audio_codec_context->frame_size, audio_codec_context_get_audio_format(audio_codec_context)) != 0) {
        fprintf(stderr, "gsr error: failed to setup audio recording to combined sink\n");
        _exit(1);
    }
@@ -2989,19 +2962,19 @@ static AudioDeviceData create_application_audio_audio_input(const MergedAudioInp
    }

    if(!audio_devices_sources.empty()) {
-        if(!gsr_pipewire_audio_add_link_from_sources_to_sink(pipewire_audio, audio_devices_sources.data(), audio_devices_sources.size(), combined_sink_name.c_str())) {
+        if(!gsr_pipewire_audio_add_link_from_sources_to_stream(pipewire_audio, audio_devices_sources.data(), audio_devices_sources.size(), combined_sink_name.c_str())) {
            fprintf(stderr, "gsr error: failed to add application audio link\n");
            _exit(1);
        }
    }

    if(app_audio_inverted) {
-        if(!gsr_pipewire_audio_add_link_from_apps_to_sink_inverted(pipewire_audio, app_names.data(), app_names.size(), combined_sink_name.c_str())) {
+        if(!gsr_pipewire_audio_add_link_from_apps_to_stream_inverted(pipewire_audio, app_names.data(), app_names.size(), combined_sink_name.c_str())) {
            fprintf(stderr, "gsr error: failed to add application audio link\n");
            _exit(1);
        }
    } else {
-        if(!gsr_pipewire_audio_add_link_from_apps_to_sink(pipewire_audio, app_names.data(), app_names.size(), combined_sink_name.c_str())) {
+        if(!gsr_pipewire_audio_add_link_from_apps_to_stream(pipewire_audio, app_names.data(), app_names.size(), combined_sink_name.c_str())) {
            fprintf(stderr, "gsr error: failed to add application audio link\n");
            _exit(1);
        }
@@ -3579,7 +3552,7 @@ int main(int argc, char **argv) {
    while(running) {
        void *sound_buffer;
        int sound_buffer_size = -1;
-        //const double time_before_read_seconds = clock_get_monotonic_seconds();
+        const double time_before_read_seconds = clock_get_monotonic_seconds();
        if(audio_device.sound_device.handle) {
            // TODO: use this instead of calculating time to read. But this can fluctuate and we dont want to go back in time,
            // also it's 0.0 for some users???
@@ -3589,8 +3562,6 @@ int main(int argc, char **argv) {

        const bool got_audio_data = sound_buffer_size >= 0;
        //fprintf(stderr, "got audio data: %s\n", got_audio_data ? "yes" : "no");
-        //const double time_after_read_seconds = clock_get_monotonic_seconds();
-        //const double time_to_read_seconds = time_after_read_seconds - time_before_read_seconds;
        //fprintf(stderr, "time to read: %f, %s, %f\n", time_to_read_seconds, got_audio_data ? "yes" : "no", timeout_sec);
        const double this_audio_frame_time = clock_get_monotonic_seconds() - paused_time_offset;

@@ -3659,10 +3630,9 @@ int main(int argc, char **argv) {
            }
        }

-        if(!audio_device.sound_device.handle)
+        if(!audio_device.sound_device.handle) {
            av_usleep(timeout_ms * 1000);

-        if(got_audio_data) {
+        } else if(got_audio_data) {
            // TODO: Instead of converting audio, get float audio from alsa. Or does alsa do conversion internally to get this format?
            if(needs_audio_conversion)
                swr_convert(swr, &audio_device.frame->data[0], audio_track.codec_context->frame_size, (const uint8_t**)&sound_buffer, audio_track.codec_context->frame_size);
@@ -3690,6 +3660,12 @@ int main(int argc, char **argv) {

            audio_device.frame->pts += audio_track.codec_context->frame_size;
            num_received_frames++;
        } else {
            const double time_after_read_seconds = clock_get_monotonic_seconds();
            const double time_to_read_seconds = time_after_read_seconds - time_before_read_seconds;
            const double time_to_sleep_until_next_frame = timeout_sec - time_to_read_seconds;
            if(time_to_sleep_until_next_frame > 0.0)
                av_usleep(time_to_sleep_until_next_frame * 1000ULL * 1000ULL);
        }
    }
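`parse_audio_input_arg` above splits the `-a` argument on `|` and skips empty pieces. A self-contained C sketch of that split (the original uses C++ `split_string` with a lambda; this hypothetical helper only counts the resulting inputs):

```c
#include <stddef.h>
#include <string.h>

/* Minimal sketch of what parse_audio_input_arg does with the "-a"
   argument: split on '|' and count the non-empty pieces, so
   "default_output|default_input" yields two audio inputs. */
static int count_audio_inputs(const char *str) {
    int count = 0;
    const char *start = str;
    for(;;) {
        const char *sep = strchr(start, '|');
        const size_t len = sep ? (size_t)(sep - start) : strlen(start);
        if(len > 0)
            ++count; /* empty pieces between separators are skipped */
        if(!sep)
            return count;
        start = sep + 1;
    }
}
```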
@@ -58,6 +58,29 @@ static bool requested_link_matches_name_case_insensitive(const gsr_pipewire_audi
|
||||
return false;
|
||||
}
|
||||
|
||||
static bool requested_link_matches_name_case_insensitive_any_type(const gsr_pipewire_audio *self, const gsr_pipewire_audio_requested_link *requested_link, const char *name) {
|
||||
for(int i = 0; i < requested_link->num_outputs; ++i) {
|
||||
switch(requested_link->outputs[i].type) {
|
||||
case GSR_PIPEWIRE_AUDIO_REQUESTED_TYPE_STANDARD: {
|
||||
if(strcasecmp(requested_link->outputs[i].name, name) == 0)
|
||||
return true;
|
||||
break;
|
||||
}
|
||||
case GSR_PIPEWIRE_AUDIO_REQUESTED_TYPE_DEFAULT_OUTPUT: {
|
||||
if(strcasecmp(self->default_output_device_name, name) == 0)
|
||||
return true;
|
||||
break;
|
||||
}
|
||||
case GSR_PIPEWIRE_AUDIO_REQUESTED_TYPE_DEFAULT_INPUT: {
|
||||
if(strcasecmp(self->default_input_device_name, name) == 0)
|
||||
return true;
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
static bool requested_link_has_type(const gsr_pipewire_audio_requested_link *requested_link, gsr_pipewire_audio_requested_type type) {
|
||||
for(int i = 0; i < requested_link->num_outputs; ++i) {
|
||||
if(requested_link->outputs[i].type == type)
|
||||
@@ -168,7 +191,7 @@ static void gsr_pipewire_audio_create_link(gsr_pipewire_audio *self, const gsr_p
|
||||
if(output_node->type != requested_link->output_type)
|
||||
continue;
|
||||
|
||||
const bool requested_link_matches_app = requested_link_matches_name_case_insensitive(requested_link, output_node->name);
|
||||
const bool requested_link_matches_app = requested_link_matches_name_case_insensitive_any_type(self, requested_link, output_node->name);
|
||||
if(requested_link->inverted) {
|
||||
if(requested_link_matches_app)
|
||||
continue;
|
||||
@@ -642,20 +665,6 @@ void gsr_pipewire_audio_deinit(gsr_pipewire_audio *self) {
|
||||
pw_thread_loop_stop(self->thread_loop);
|
||||
}
|
||||
|
||||
for(size_t i = 0; i < self->num_virtual_sink_proxies; ++i) {
|
||||
if(self->virtual_sink_proxies[i]) {
|
||||
pw_proxy_destroy(self->virtual_sink_proxies[i]);
|
||||
self->virtual_sink_proxies[i] = NULL;
|
||||
}
|
||||
}
|
||||
self->num_virtual_sink_proxies = 0;
|
||||
self->virtual_sink_proxies_capacity_items = 0;
|
||||
|
||||
if(self->virtual_sink_proxies) {
|
||||
free(self->virtual_sink_proxies);
|
||||
self->virtual_sink_proxies = NULL;
|
||||
}
|
||||
|
||||
if(self->metadata_proxy) {
|
||||
spa_hook_remove(&self->metadata_listener);
|
||||
spa_hook_remove(&self->metadata_proxy_listener);
|
||||
@@ -733,54 +742,6 @@ void gsr_pipewire_audio_deinit(gsr_pipewire_audio *self) {
|
||||
#endif
|
||||
}
|
||||
|
||||
static struct pw_properties* gsr_pipewire_create_null_audio_sink(const char *name) {
|
||||
char props_str[512];
|
||||
snprintf(props_str, sizeof(props_str),
|
||||
"{ factory.name=support.null-audio-sink node.name=\"%s\" media.class=Audio/Sink object.linger=false audio.position=[FL FR]"
|
||||
" monitor.channel-volumes=true monitor.passthrough=true adjust_time=0 node.description=gsr-app-sink slaves=\"\" priority.driver=1 priority.session=1 }", name);
|
||||
struct pw_properties *props = pw_properties_new_string(props_str);
|
||||
if(!props) {
|
||||
fprintf(stderr, "gsr error: gsr_pipewire_create_null_audio_sink: failed to create virtual sink properties\n");
|
||||
return NULL;
|
||||
}
|
||||
return props;
|
||||
}
|
||||
|
||||
bool gsr_pipewire_audio_create_virtual_sink(gsr_pipewire_audio *self, const char *name) {
|
||||
if(!array_ensure_capacity((void**)&self->virtual_sink_proxies, self->num_virtual_sink_proxies, &self->virtual_sink_proxies_capacity_items, sizeof(struct pw_proxy*)))
|
||||
return false;
|
||||
|
||||
pw_thread_loop_lock(self->thread_loop);
|
||||
|
||||
struct pw_properties *virtual_sink_props = gsr_pipewire_create_null_audio_sink(name);
|
||||
if(!virtual_sink_props) {
|
||||
pw_thread_loop_unlock(self->thread_loop);
|
||||
return false;
|
||||
}
|
||||
|
||||
struct pw_proxy *virtual_sink_proxy = pw_core_create_object(self->core, "adapter", PW_TYPE_INTERFACE_Node, PW_VERSION_NODE, &virtual_sink_props->dict, 0);
|
||||
// TODO:
|
||||
// If these are done then the above needs sizeof(*self) as the last argument
|
||||
//pw_proxy_add_object_listener(virtual_sink_proxy, &pd->object_listener, &node_events, self);
|
||||
//pw_proxy_add_listener(virtual_sink_proxy, &pd->proxy_listener, &proxy_events, self);
|
||||
// TODO: proxy
|
||||
pw_properties_free(virtual_sink_props);
|
||||
if(!virtual_sink_proxy) {
|
||||
fprintf(stderr, "gsr error: gsr_pipewire_audio_create_virtual_sink: failed to create virtual sink\n");
|
||||
pw_thread_loop_unlock(self->thread_loop);
|
||||
return false;
|
||||
}
|
||||
|
||||
self->server_version_sync = pw_core_sync(self->core, PW_ID_CORE, self->server_version_sync);
|
||||
pw_thread_loop_wait(self->thread_loop);
|
||||
pw_thread_loop_unlock(self->thread_loop);
|
||||
|
||||
self->virtual_sink_proxies[self->num_virtual_sink_proxies] = virtual_sink_proxy;
|
||||
++self->num_virtual_sink_proxies;
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
static bool string_remove_suffix(char *str, const char *suffix) {
|
||||
int str_len = strlen(str);
|
||||
int suffix_len = strlen(suffix);
|
||||
@@ -834,6 +795,7 @@ static bool gsr_pipewire_audio_add_links_to_output(gsr_pipewire_audio *self, con
+    self->requested_links[self->num_requested_links].inverted = inverted;
     ++self->num_requested_links;
     gsr_pipewire_audio_create_link(self, &self->requested_links[self->num_requested_links - 1]);
     // TODO: Remove these?
     gsr_pipewire_audio_create_link_for_default_devices(self, &self->requested_links[self->num_requested_links - 1], GSR_PIPEWIRE_AUDIO_REQUESTED_TYPE_DEFAULT_OUTPUT);
     gsr_pipewire_audio_create_link_for_default_devices(self, &self->requested_links[self->num_requested_links - 1], GSR_PIPEWIRE_AUDIO_REQUESTED_TYPE_DEFAULT_INPUT);
     pw_thread_loop_unlock(self->thread_loop);
@@ -865,6 +827,10 @@ bool gsr_pipewire_audio_add_link_from_apps_to_sink_inverted(gsr_pipewire_audio *
+    return gsr_pipewire_audio_add_links_to_output(self, app_names, num_app_names, sink_name_input, GSR_PIPEWIRE_AUDIO_NODE_TYPE_STREAM_OUTPUT, GSR_PIPEWIRE_AUDIO_LINK_INPUT_TYPE_SINK, true);
+}
+
 bool gsr_pipewire_audio_add_link_from_sources_to_stream(gsr_pipewire_audio *self, const char **source_names, int num_source_names, const char *stream_name_input) {
     return gsr_pipewire_audio_add_links_to_output(self, source_names, num_source_names, stream_name_input, GSR_PIPEWIRE_AUDIO_NODE_TYPE_SINK_OR_SOURCE, GSR_PIPEWIRE_AUDIO_LINK_INPUT_TYPE_STREAM, false);
 }

 bool gsr_pipewire_audio_add_link_from_sources_to_sink(gsr_pipewire_audio *self, const char **source_names, int num_source_names, const char *sink_name_input) {
     return gsr_pipewire_audio_add_links_to_output(self, source_names, num_source_names, sink_name_input, GSR_PIPEWIRE_AUDIO_NODE_TYPE_SINK_OR_SOURCE, GSR_PIPEWIRE_AUDIO_LINK_INPUT_TYPE_SINK, false);
 }
@@ -280,7 +280,8 @@ static void on_param_changed_cb(void *user_data, uint32_t id, const struct spa_p
         self->format.info.raw.format,
         spa_debug_type_find_name(spa_type_video_format, self->format.info.raw.format));

-    if(has_modifier) {
+    self->has_modifier = has_modifier;
+    if(self->has_modifier) {
         fprintf(stderr, "gsr info: pipewire: Modifier: 0x%" PRIx64 "\n", self->format.info.raw.modifier);
     }
@@ -736,7 +737,6 @@ void gsr_pipewire_video_deinit(gsr_pipewire_video *self) {
     self->dmabuf_num_planes = 0;

     self->negotiated = false;
-    self->renegotiated = false;

     if(self->mutex_initialized) {
         pthread_mutex_destroy(&self->mutex);
@@ -783,21 +783,20 @@ static EGLImage gsr_pipewire_video_create_egl_image_with_fallback(gsr_pipewire_v
     }

     EGLImage image = NULL;
-    if(self->no_modifiers_fallback) {
+    if(self->no_modifiers_fallback || !self->has_modifier) {
         image = gsr_pipewire_video_create_egl_image(self, fds, offsets, pitches, modifiers, false);
     } else {
         image = gsr_pipewire_video_create_egl_image(self, fds, offsets, pitches, modifiers, true);
         if(!image) {
-            if(self->renegotiated || self->format.info.raw.modifier == DRM_FORMAT_MOD_INVALID) {
+            if(self->format.info.raw.modifier == DRM_FORMAT_MOD_INVALID) {
                 fprintf(stderr, "gsr error: gsr_pipewire_video_create_egl_image_with_fallback: failed to create egl image with modifiers, trying without modifiers\n");
                 self->no_modifiers_fallback = true;
                 image = gsr_pipewire_video_create_egl_image(self, fds, offsets, pitches, modifiers, false);
             } else {
                 fprintf(stderr, "gsr error: gsr_pipewire_video_create_egl_image_with_fallback: failed to create egl image with modifiers, renegotiating with a different modifier\n");
                 self->negotiated = false;
-                self->renegotiated = true;
-                gsr_pipewire_video_remove_modifier(self, self->format.info.raw.modifier);
                 pw_thread_loop_lock(self->thread_loop);
+                gsr_pipewire_video_remove_modifier(self, self->format.info.raw.modifier);
                 pw_loop_signal_event(pw_thread_loop_get_loop(self->thread_loop), self->reneg);
                 pw_thread_loop_unlock(self->thread_loop);
             }
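The fallback logic in the hunk above boils down to: try importing the buffer with an explicit DRM modifier; on failure, either fall back permanently to a modifier-less import (when no usable modifier remains) or drop the failing modifier and renegotiate. The control flow can be sketched abstractly with stub functions standing in for the EGL/PipeWire calls (everything here is a hypothetical stand-in, not the real API):

```c
#include <stdbool.h>
#include <stddef.h>

#define MOD_INVALID (-1L)

typedef struct {
    bool no_modifiers_fallback;   // once set, skip the modifier path for good
    long current_modifier;        // MOD_INVALID when the server offered none
} importer;

// Stub for the real image-import call. It only "succeeds" without
// modifiers here, purely to exercise the fallback path in this sketch.
static void *import_image(importer *self, bool use_modifiers) {
    (void)self;
    return use_modifiers ? NULL : (void*)1;
}

// Mirrors the shape of the function above: try with modifiers first,
// remember a permanent fallback, otherwise leave renegotiation to the caller.
static void *import_with_fallback(importer *self) {
    if(self->no_modifiers_fallback)
        return import_image(self, false);

    void *image = import_image(self, true);
    if(!image) {
        if(self->current_modifier == MOD_INVALID) {
            // Nothing left to negotiate: permanently fall back.
            self->no_modifiers_fallback = true;
            image = import_image(self, false);
        } else {
            // The real code removes the modifier and renegotiates here.
        }
    }
    return image;
}
```

The point of the `no_modifiers_fallback` flag is that the decision is sticky: after one failure with no alternative modifier, later frames go straight to the modifier-less path.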
@@ -189,7 +189,7 @@ static pa_handle* pa_sound_device_new(const char *server,
     snprintf(p->stream_name, sizeof(p->stream_name), "%s", stream_name);

     p->reconnect = true;
-    p->reconnect_last_tried_seconds = clock_get_monotonic_seconds() - 1000.0;
+    p->reconnect_last_tried_seconds = clock_get_monotonic_seconds() - (RECONNECT_TRY_TIMEOUT_SECONDS * 1000.0 * 2.0);
     p->default_output_device_name[0] = '\0';
     p->default_input_device_name[0] = '\0';
     p->device_type = DeviceType::STANDARD;
@@ -206,10 +206,17 @@ static pa_handle* pa_sound_device_new(const char *server,
     p->output_length = buffer_size;
     p->output_index = 0;

+    pa_proplist *proplist = pa_proplist_new();
+    pa_proplist_sets(proplist, PA_PROP_MEDIA_ROLE, "production");
+    if(strcmp(device_name, "") == 0) {
+        pa_proplist_sets(proplist, "node.autoconnect", "false");
+        pa_proplist_sets(proplist, "node.dont-reconnect", "true");
+    }
+
     if (!(p->mainloop = pa_mainloop_new()))
         goto fail;

-    if (!(p->context = pa_context_new(pa_mainloop_get_api(p->mainloop), name)))
+    if (!(p->context = pa_context_new_with_proplist(pa_mainloop_get_api(p->mainloop), name, proplist)))
         goto fail;

     if (pa_context_connect(p->context, server, PA_CONTEXT_NOFLAGS, NULL) < 0) {
@@ -239,12 +246,14 @@ static pa_handle* pa_sound_device_new(const char *server,
     if(pa)
         pa_operation_unref(pa);

+    pa_proplist_free(proplist);
     return p;

 fail:
     if (rerror)
         *rerror = error;
     pa_sound_device_free(p);
+    pa_proplist_free(proplist);
     return NULL;
 }
@@ -283,20 +292,6 @@ static bool pa_sound_device_handle_reconnect(pa_handle *p, char *device_name, si
         return false;
     }

-    for(;;) {
-        pa_stream_state_t state = pa_stream_get_state(p->stream);
-
-        if(state == PA_STREAM_READY)
-            break;
-
-        if(!PA_STREAM_IS_GOOD(state)) {
-            //pa_context_errno(p->context);
-            return false;
-        }
-
-        pa_mainloop_iterate(p->mainloop, 1, NULL);
-    }
-
     std::lock_guard<std::mutex> lock(p->reconnect_mutex);
     p->reconnect = false;
     return true;
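The deleted loop blocked inside the reconnect handler, pumping the mainloop until the stream left its creating state. After this change the read path instead checks readiness once per call and simply fails (and retries later) if the stream is not ready yet. The non-blocking shape can be sketched with a simulated state pump standing in for `pa_mainloop_iterate`/`pa_stream_get_state` (all names here are illustrative stubs, not the PulseAudio API):

```c
#include <stdbool.h>

typedef enum { STREAM_CREATING, STREAM_READY, STREAM_FAILED } stream_state;

// Simulated event pump: one iteration advances a creating stream to ready.
static stream_state pump_once(stream_state s) {
    return s == STREAM_CREATING ? STREAM_READY : s;
}

// Non-blocking variant of the deleted loop: pump once and report the
// current state instead of spinning until READY. The caller retries on
// its next invocation rather than blocking here.
static bool stream_ready_nonblocking(stream_state *s) {
    *s = pump_once(*s);
    return *s == STREAM_READY;
}
```

The advantage over the removed `for(;;)` loop is that a stream stuck in a bad state can no longer stall the caller indefinitely; a failed stream just makes each read attempt return an error.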
@@ -317,6 +312,10 @@ static int pa_sound_device_read(pa_handle *p, double timeout_seconds) {
     if(!pa_sound_device_handle_reconnect(p, device_name, sizeof(device_name), start_time))
         goto fail;

+    pa_mainloop_iterate(p->mainloop, 0, NULL);
+    if(pa_stream_get_state(p->stream) != PA_STREAM_READY)
+        goto fail;
+
     CHECK_DEAD_GOTO(p, rerror, fail);

     while (p->output_index < p->output_length) {
@@ -410,7 +409,7 @@ static int audio_format_to_get_bytes_per_sample(AudioFormat audio_format) {
     return 2;
 }

-int sound_device_get_by_name(SoundDevice *device, const char *device_name, const char *description, unsigned int num_channels, unsigned int period_frame_size, AudioFormat audio_format) {
+int sound_device_get_by_name(SoundDevice *device, const char *node_name, const char *device_name, const char *description, unsigned int num_channels, unsigned int period_frame_size, AudioFormat audio_format) {
     pa_sample_spec ss;
     ss.format = audio_format_to_pulse_audio_format(audio_format);
     ss.rate = 48000;
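Only the tail of `audio_format_to_get_bytes_per_sample` is visible above (`return 2;`, i.e. 16-bit samples). A typical complete mapping for common PCM formats might look like this — the enum values are illustrative, since only the 16-bit case appears in the diff:

```c
// Bytes per sample for common PCM formats (illustrative enum, not
// necessarily the project's exact AudioFormat definition).
typedef enum { AUDIO_FORMAT_S16, AUDIO_FORMAT_S32, AUDIO_FORMAT_F32 } AudioFormat;

static int audio_format_to_get_bytes_per_sample(AudioFormat audio_format) {
    switch(audio_format) {
        case AUDIO_FORMAT_S16: return 2; // signed 16-bit, matches the visible return 2
        case AUDIO_FORMAT_S32: return 4; // signed 32-bit
        case AUDIO_FORMAT_F32: return 4; // 32-bit float
    }
    return 2;
}
```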
@@ -424,7 +423,7 @@ int sound_device_get_by_name(SoundDevice *device, const char *device_name, const
     buffer_attr.maxlength = buffer_attr.fragsize;

     int error = 0;
-    pa_handle *handle = pa_sound_device_new(nullptr, description, device_name, description, &ss, &buffer_attr, &error);
+    pa_handle *handle = pa_sound_device_new(nullptr, node_name, device_name, description, &ss, &buffer_attr, &error);
     if(!handle) {
         fprintf(stderr, "gsr error: pa_sound_device_new() failed: %s. Audio input device %s might not be valid\n", pa_strerror(error), device_name);
         return -1;
||||
Reference in New Issue
Block a user