
pCloud & File Management

Help for Current Versions of MX
spelk
Forum Novice
Posts: 7
Joined: Sat Dec 01, 2018 2:28 pm

pCloud & File Management

#1

Post by spelk » Fri Jan 04, 2019 6:49 pm

A problem that has cropped up since the MX-18 update is a major delay when starting a file manager (Thunar, SpaceFM) or performing any file operation from within Firefox or another program (loading/saving a file).

It can take 3 or 4 minutes for Thunar to load up after clicking the launcher.

If I kill the pCloud application, Thunar loads up instantly.

pCloud sets up a mounted FUSE virtual drive (I think), which I can copy files to, and I use it to store quite a bit of my digital life, so killing pCloud is a big downside for my general file operations.
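For reference, whether the FUSE mount is active can be checked from the kernel's mount table (a read-only check; the exact mount name pCloud uses may differ):

```shell
# List any FUSE filesystems currently mounted; pCloud's virtual
# drive should appear here while the app is running.
grep -i fuse /proc/mounts || echo "no FUSE mounts found"
```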

I'm not sure whether it is a change in MX Linux or whether the pCloud app has changed.

https://www.pcloud.com/how-to-install-p ... lectron-64

The pCloud devs seem to be pursuing an Electron-based application these days: "We are currently focusing on developing pCloud Drive Electron. pCloud Drive 3.1.1 is no longer supported."

The Linux 64-bit pCloud app is downloaded, renamed from pcloud to pcloud.AppImage, marked executable, and then added to the Startup Launcher.
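The steps above can be sketched in the shell (a placeholder file stands in for the binary actually downloaded from pcloud.com; paths will differ on a real system):

```shell
# Sketch of the setup steps described above; a placeholder file
# stands in for the binary actually downloaded from pcloud.com.
touch pcloud                 # placeholder for the downloaded file
mv pcloud pcloud.AppImage    # rename as the pCloud instructions say
chmod +x pcloud.AppImage     # mark it executable so it can launch
ls -l pcloud.AppImage        # the x bits should now be set
```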

Is there anything I can do, configuration-wise, to reduce this big delay while still running my virtual backup drive with pCloud?

Any help or advice on this issue would be very much welcome.

Code:


$ inxi -Fzp
System:    Host: XENO Kernel: 4.19.0-1-amd64 x86_64 bits: 64 Desktop: Xfce 4.12.3 
           Distro: MX-18_x64 Continuum March 14  2018 
Machine:   Type: Desktop System: ASUS product: All Series v: N/A serial: <filter> 
           Mobo: ASUSTeK model: Z97-A v: Rev 1.xx serial: <filter> 
           UEFI [Legacy]: American Megatrends v: 2801 date: 11/11/2015 
CPU:       Topology: Quad Core model: Intel Core i5-4460 bits: 64 type: MCP L2 cache: 6144 KiB 
           Speed: 800 MHz min/max: 800/3400 MHz Core speeds (MHz): 1: 800 2: 800 3: 800 4: 800 
Graphics:  Device-1: NVIDIA GP106 [GeForce GTX 1060 6GB] driver: nvidia v: 390.87 
           Display: x11 server: X.Org 1.19.2 driver: nvidia resolution: 1920x1080~60Hz 
           OpenGL: renderer: GeForce GTX 1060 6GB/PCIe/SSE2 v: 4.6.0 NVIDIA 390.87 
Audio:     Device-1: Intel 9 Series Family HD Audio driver: snd_hda_intel 
           Device-2: NVIDIA driver: snd_hda_intel 
           Sound Server: ALSA v: k4.19.0-1-amd64 
Network:   Device-1: Intel Ethernet I218-V driver: e1000e 
           IF: eth0 state: up speed: 1000 Mbps duplex: full mac: <filter> 
Drives:    Local Storage: total: 2.84 TiB used: 2.73 TiB (96.1%) 
           ID-1: /dev/sda vendor: Kingston model: SV300S37A120G size: 111.79 GiB 
           ID-2: /dev/sdb vendor: Western Digital model: WD1003FZEX-00MK2A0 size: 931.51 GiB 
           ID-3: /dev/sdc vendor: Toshiba model: HDWD120 size: 1.82 TiB 
Partition: ID-1: / size: 106.58 GiB used: 35.90 GiB (33.7%) fs: ext4 dev: /dev/sda1 
           ID-2: /home/<filter>/pCloudDrive size: 2.00 TiB used: 712.35 GiB (34.8%) fs: fuse 
           source: ERR-102 
           ID-3: /media/DATA size: 915.71 GiB used: 540.69 GiB (59.0%) fs: ext4 dev: /dev/sdb1 
           ID-4: /media/HOLD size: 1.79 TiB used: 1.47 TiB (82.1%) fs: ext4 dev: /dev/sdc1 
           ID-5: swap-1 size: 3.00 GiB used: 0 KiB (0.0%) fs: swap dev: /dev/sda2 
Sensors:   System Temperatures: cpu: 30.0 C mobo: N/A gpu: nvidia temp: 38 C 
           Fan Speeds (RPM): cpu: 0 gpu: nvidia fan: 46% 
Info:      Processes: 202 Uptime: 21m Memory: 7.74 GiB used: 1.71 GiB (22.1%) Shell: bash 
           inxi: 3.0.29 


truongtfg
Forum Novice
Posts: 19
Joined: Sun Jul 15, 2018 9:22 am

Re: pCloud & File Management

#2

Post by truongtfg » Fri Jan 04, 2019 10:27 pm

I also run the latest updated MX Linux and use pCloud 1.4.7, and I have not seen any problems with Thunar, SpaceFM or other file managers. Can you show the results of these commands:

Code:

dmesg -l err,warn,crit,alert,emerg
cat /sys/block/sda/queue/scheduler

Besides, you can try installing another kernel such as Liquorix (in the MX repo) or XanMod (I'm using the latter; you can download and install it from here: https://xanmod.org/).

My PC spec:

Code:

System:    Host: mxDes Kernel: 4.20.0-xanmod1 x86_64 bits: 64 compiler: gcc v: 8.2.0 
           Desktop: Xfce 4.12.3 Distro: MX-18_x64 Continuum March 14  2018 
           base: Debian GNU/Linux 9 (stretch) 
Machine:   Type: Desktop Mobo: MSI model: B75MA-P45 (MS-7798) v: 1.0 serial: <filter> 
           BIOS: American Megatrends v: 1.9 date: 09/30/2013 
CPU:       Topology: Quad Core model: Intel Core i5-2500 bits: 64 type: MCP arch: Sandy Bridge 
           rev: 7 L2 cache: 6144 KiB 
           flags: lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx bogomips: 26401 
           Speed: 3363 MHz min/max: 1600/3700 MHz Core speeds (MHz): 1: 3155 2: 1909 3: 3066 
           4: 2372 
Graphics:  Device-1: AMD Ellesmere [Radeon RX 470/480] vendor: Hightech Information System 
           driver: amdgpu v: kernel bus ID: 01:00.0 
           Display: x11 server: X.Org 1.19.2 driver: amdgpu 
           resolution: 1920x1080~60Hz, 1366x768~60Hz 
           OpenGL: 
           renderer: AMD Radeon RX 470 Graphics (POLARIS10 DRM 3.27.0 4.20.0-xanmod1 LLVM 7.0.0) 
           v: 4.5 Mesa 18.2.6 direct render: Yes 
Audio:     Device-1: Intel 7 Series/C216 Family High Definition Audio vendor: Micro-Star MSI 
           driver: snd_hda_intel v: kernel bus ID: 00:1b.0 
           Device-2: AMD vendor: Hightech Information System driver: snd_hda_intel v: kernel 
           bus ID: 01:00.1 
           Sound Server: ALSA v: k4.20.0-xanmod1 
Network:   Device-1: Realtek RTL8111/8168/8411 PCI Express Gigabit Ethernet 
           vendor: Micro-Star MSI driver: r8169 v: kernel port: d000 bus ID: 03:00.0 
           IF: eth0 state: up speed: 1000 Mbps duplex: full mac: <filter> 
Drives:    Local Storage: total: 4.16 TiB used: 2.68 TiB (64.5%) 
           ID-1: /dev/sda vendor: Toshiba model: DT01ACA050 size: 465.76 GiB 
           ID-2: /dev/sdb vendor: Seagate model: ST1000DM010-2EP102 size: 931.51 GiB 
           ID-3: /dev/sdc vendor: Seagate model: ST3160318AS size: 149.05 GiB 
           ID-4: /dev/sdd vendor: Western Digital model: WD1600AAJS-00L7A0 size: 149.05 GiB 
           ID-5: /dev/sde vendor: Hitachi model: HCS5C3225SLA380 size: 232.89 GiB 
           ID-6: /dev/sdf type: USB vendor: Seagate model: Expansion size: 1.82 TiB 
           ID-7: /dev/sdg type: USB vendor: Western Digital model: WD5000LMVW-11VEDS3 
           size: 465.73 GiB 
Partition: ID-1: / size: 62.50 GiB used: 18.33 GiB (29.3%) fs: ext4 dev: /dev/sdd2 
           ID-2: /home size: 75.34 GiB used: 46.61 GiB (61.9%) fs: ext4 dev: /dev/sdd3 
           ID-3: swap-1 size: 8.10 GiB used: 0 KiB (0.0%) fs: swap dev: /dev/sdc3 
           ID-4: swap-2 size: 8.00 GiB used: 0 KiB (0.0%) fs: swap dev: /dev/sdd1 
Sensors:   System Temperatures: cpu: 37.0 C mobo: N/A gpu: amdgpu temp: 42 C 
           Fan Speeds (RPM): N/A gpu: amdgpu fan: 1028 
Info:      Processes: 264 Uptime: 1h 10m Memory: 11.66 GiB used: 3.05 GiB (26.2%) Init: SysVinit 
           runlevel: 5 Compilers: gcc: 8.1.0 Shell: bash v: 4.4.12 inxi: 3.0.29 

spelk
Forum Novice
Posts: 7
Joined: Sat Dec 01, 2018 2:28 pm

Re: pCloud & File Management

#3

Post by spelk » Sat Jan 05, 2019 5:23 am

Thank you for your reply.

Here are the results of the commands:

Code:

$ dmesg -l err,warn,crit,alert,emerg
[    0.195109] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[    0.195110] ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8)
[    0.207001] pmd_set_huge: Cannot satisfy [mem 0xe0000000-0xe0200000] with a huge-page mapping due to MTRR override.
[    2.405069] nvidia: loading out-of-tree module taints kernel.
[    2.405075] nvidia: module license 'NVIDIA' taints kernel.
[    2.405075] Disabling lock debugging due to kernel taint
[    2.423424] NVRM: loading NVIDIA UNIX x86_64 Kernel Module  390.87  Tue Aug 21 12:33:05 PDT 2018 (using threaded interrupts)
[    5.784202] resource sanity check: requesting [mem 0x000e0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000e0000-0x000e3fff window]
[    5.784389] caller _nv029980rm+0x57/0x90 [nvidia] mapping multiple BARs
[    6.206455] resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000d0000-0x000d3fff window]
[    6.206639] caller _nv001169rm+0xe3/0x1d0 [nvidia] mapping multiple BARs
[    7.105937] systemd-logind[2557]: Failed to start user service, ignoring: Unknown unit: user@115.service
[   65.116496] systemd-logind[2557]: Failed to start user service, ignoring: Unknown unit: user@1000.service
and

Code:

$ cat /sys/block/sda/queue/scheduler
[mq-deadline] none

truongtfg
Forum Novice
Posts: 19
Joined: Sun Jul 15, 2018 9:22 am

Re: pCloud & File Management

#4

Post by truongtfg » Sat Jan 05, 2019 9:28 am

Your kernel is using the mq-deadline scheduler, which, in my experience, has caused slowness in file operations (I/O) whenever I have used it. It is also odd that bfq is unavailable: is your kernel a custom install, or is it the default kernel? If possible, please install the Liquorix kernel (which uses bfq by default), or even XanMod, and see if the situation improves.
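As an aside, if bfq happens to be built for the running kernel, it can also be selected at runtime without installing a new kernel (a sketch; whether bfq is available depends on the kernel build, and the change does not survive a reboot):

```shell
# Show the schedulers each disk offers; the active one is in [brackets].
for f in /sys/block/sd*/queue/scheduler; do
    if [ -e "$f" ]; then
        printf '%s: %s\n' "$f" "$(cat "$f")"
    fi
done

# If bfq is listed (or its module can be loaded), switch to it:
#   sudo modprobe bfq
#   echo bfq | sudo tee /sys/block/sda/queue/scheduler
```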

spelk
Forum Novice
Posts: 7
Joined: Sat Dec 01, 2018 2:28 pm

Re: pCloud & File Management

#5

Post by spelk » Fri Jan 11, 2019 4:32 pm

Thanks for your help. I was using the MX 4.15 kernel and then installed the MX 4.19 one.

I've now installed the Liquorix kernel as you suggested and have re-run the diagnostic commands:

Code:


$ dmesg -l err,warn,crit,alert,emerg
[    0.154819] ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
[    0.154819] ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8)
[    2.689498] ACPI Warning: SystemIO range 0x0000000000001828-0x000000000000182F conflicts with OpRegion 0x0000000000001800-0x000000000000187F (\PMIO) (20180810/utaddress-213)
[    2.691155] ACPI Warning: SystemIO range 0x0000000000001C40-0x0000000000001C4F conflicts with OpRegion 0x0000000000001C00-0x0000000000001C7F (\_GPE.GPBX) (20180810/utaddress-213)
[    2.691159] ACPI Warning: SystemIO range 0x0000000000001C40-0x0000000000001C4F conflicts with OpRegion 0x0000000000001C00-0x0000000000001FFF (\GPR) (20180810/utaddress-213)
[    2.691166] ACPI Warning: SystemIO range 0x0000000000001C30-0x0000000000001C3F conflicts with OpRegion 0x0000000000001C00-0x0000000000001C7F (\_GPE.GPBX) (20180810/utaddress-213)
[    2.691169] ACPI Warning: SystemIO range 0x0000000000001C30-0x0000000000001C3F conflicts with OpRegion 0x0000000000001C00-0x0000000000001C3F (\GPRL) (20180810/utaddress-213)
[    2.691171] ACPI Warning: SystemIO range 0x0000000000001C30-0x0000000000001C3F conflicts with OpRegion 0x0000000000001C00-0x0000000000001FFF (\GPR) (20180810/utaddress-213)
[    2.691177] ACPI Warning: SystemIO range 0x0000000000001C00-0x0000000000001C2F conflicts with OpRegion 0x0000000000001C00-0x0000000000001C7F (\_GPE.GPBX) (20180810/utaddress-213)
[    2.691181] ACPI Warning: SystemIO range 0x0000000000001C00-0x0000000000001C2F conflicts with OpRegion 0x0000000000001C00-0x0000000000001C3F (\GPRL) (20180810/utaddress-213)
[    2.691184] ACPI Warning: SystemIO range 0x0000000000001C00-0x0000000000001C2F conflicts with OpRegion 0x0000000000001C00-0x0000000000001FFF (\GPR) (20180810/utaddress-213)
[    2.691187] lpc_ich: Resource conflict(s) found affecting gpio_ich
[    2.789077] nvidia: loading out-of-tree module taints kernel.
[    2.789085] nvidia: module license 'NVIDIA' taints kernel.
[    2.789085] Disabling lock debugging due to kernel taint
[    2.808704] NVRM: loading NVIDIA UNIX x86_64 Kernel Module  390.87  Tue Aug 21 12:33:05 PDT 2018 (using threaded interrupts)
[    6.155887] resource sanity check: requesting [mem 0x000e0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000e0000-0x000e3fff window]
[    6.156060] caller _nv029980rm+0x57/0x90 [nvidia] mapping multiple BARs
[    6.571880] resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000d0000-0x000d3fff window]
[    6.572060] caller _nv001169rm+0xe3/0x1d0 [nvidia] mapping multiple BARs
[    7.225045] systemd-logind[2589]: Failed to start user service, ignoring: Unknown unit: user@115.service
[   13.517849] systemd-logind[2589]: Failed to start user service, ignoring: Unknown unit: user@1000.service

$ cat /sys/block/sda/queue/scheduler
noop [bfq-sq] 

It looks like BFQ is now running, but there seem to be some conflicts listed that weren't in the previous output.

With the Liquorix kernel in place, and running pcloud.AppImage as usual, Thunar and SpaceFM still show a very long lag before they first start. Once they get going, though, they start up rapidly after that.

I'm not quite sure what to make of some of the dmesg output, or whether I should just revert to either the MX 4.19 or MX 4.15 kernel or try to troubleshoot the issues here.

Any more help or advice would be appreciated.

KBD
Forum Guide
Posts: 1629
Joined: Sun Jul 03, 2011 7:52 pm

Re: pCloud & File Management

#6

Post by KBD » Fri Jan 11, 2019 7:47 pm

No problem here either, but I'm using the 4.14 MX kernel, not the newest.

truongtfg
Forum Novice
Posts: 19
Joined: Sun Jul 15, 2018 9:22 am

Re: pCloud & File Management

#7

Post by truongtfg » Sat Jan 12, 2019 12:07 am

@spelk: The conflict warnings in Liquorix are normal; they are also present on my PC, and so far I have yet to see any problem related to them.
Back to your case, please post the result of this command (to see how your drives are mounted):

Code:

cat /proc/mounts

And also these commands (to see if your drives are doing OK):

Code:

sudo smartctl -a /dev/sda
sudo smartctl -a /dev/sdb
sudo smartctl -a /dev/sdc

If your machine does not have smartctl, you can install it with this command:

Code:

sudo apt-get install smartmontools

One more thing you can try is replacing the SATA cables and/or switching SATA ports. It may sound weird, but in the past about 60% of the I/O-lag problems I have faced were solved by doing so.

Edit1: Add command to install smartmontools

spelk
Forum Novice
Posts: 7
Joined: Sat Dec 01, 2018 2:28 pm

Re: pCloud & File Management

#8

Post by spelk » Sat Jan 12, 2019 6:01 pm

The output of the commands:

Checking the mounts

Code:

$ cat /proc/mounts
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs rw,nosuid,relatime,size=4034492k,nr_inodes=1008623,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=810888k,mode=755 0 0
/dev/sda1 / ext4 rw,relatime 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
pstore /sys/fs/pstore pstore rw,relatime 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0
tmpfs /run/shm tmpfs rw,nosuid,nodev,noexec,relatime,size=2250920k 0 0
/dev/sdb1 /media/DATA ext4 rw,relatime 0 0
/dev/sdc1 /media/HOLD ext4 rw,relatime 0 0
rpc_pipefs /run/rpc_pipefs rpc_pipefs rw,relatime 0 0
cgroup /sys/fs/cgroup tmpfs rw,relatime,size=12k,mode=755 0 0
systemd /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,release_agent=/run/cgmanager/agents/cgm-release-agent.systemd,name=systemd 0 0
tmpfs /run/user/115 tmpfs rw,nosuid,nodev,relatime,size=810888k,mode=700,uid=115,gid=126 0 0
tmpfs /run/user/1000 tmpfs rw,nosuid,nodev,relatime,size=810888k,mode=700,uid=1000,gid=1000 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
gvfsd-fuse /run/user/1000/gvfs fuse.gvfsd-fuse rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
SDA

Code:

$ sudo smartctl -a /dev/sda
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.19.0-13.1-liquorix-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     SandForce Driven SSDs
Device Model:     KINGSTON SV300S37A120G
Serial Number:    50026B724B097403
LU WWN Device Id: 5 0026b7 24b097403
Firmware Version: 60AABBF0
User Capacity:    120,034,123,776 bytes [120 GB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    Solid State Device
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS, ACS-2 T13/2015-D revision 3
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sat Jan 12 21:57:42 2019 GMT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x02)	Offline data collection activity
					was completed without error.
					Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0)	The previous self-test routine completed
					without error or no self-test has ever 
					been run.
Total time to complete Offline 
data collection: 		(    0) seconds.
Offline data collection
capabilities: 			 (0x7d) SMART execute Offline immediate.
					No Auto Offline data collection support.
					Abort Offline collection upon new
					command.
					Offline surface scan supported.
					Self-test supported.
					Conveyance Self-test supported.
					Selective Self-test supported.
SMART capabilities:            (0x0003)	Saves SMART data before entering
					power-saving mode.
					Supports SMART auto save timer.
Error logging capability:        (0x01)	Error logging supported.
					General Purpose Logging supported.
Short self-test routine 
recommended polling time: 	 (   1) minutes.
Extended self-test routine
recommended polling time: 	 (  48) minutes.
Conveyance self-test routine
recommended polling time: 	 (   2) minutes.
SCT capabilities: 	       (0x0025)	SCT Status supported.
					SCT Data Table supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x0032   120   120   050    Old_age   Always       -       0/0
  5 Retired_Block_Count     0x0033   100   100   003    Pre-fail  Always       -       0
  9 Power_On_Hours_and_Msec 0x0032   087   087   000    Old_age   Always       -       12189h+24m+43.470s
 12 Power_Cycle_Count       0x0032   099   099   000    Old_age   Always       -       2045
171 Program_Fail_Count      0x000a   100   100   000    Old_age   Always       -       0
172 Erase_Fail_Count        0x0032   100   100   000    Old_age   Always       -       0
174 Unexpect_Power_Loss_Ct  0x0030   000   000   000    Old_age   Offline      -       356
177 Wear_Range_Delta        0x0000   000   000   000    Old_age   Offline      -       98
181 Program_Fail_Count      0x000a   100   100   000    Old_age   Always       -       0
182 Erase_Fail_Count        0x0032   100   100   000    Old_age   Always       -       0
187 Reported_Uncorrect      0x0012   100   100   000    Old_age   Always       -       0
189 Airflow_Temperature_Cel 0x0000   027   037   000    Old_age   Offline      -       27 (Min/Max 12/37)
194 Temperature_Celsius     0x0022   027   037   000    Old_age   Always       -       27 (Min/Max 12/37)
195 ECC_Uncorr_Error_Count  0x001c   120   120   000    Old_age   Offline      -       0/0
196 Reallocated_Event_Count 0x0033   100   100   003    Pre-fail  Always       -       0
201 Unc_Soft_Read_Err_Rate  0x001c   120   120   000    Old_age   Offline      -       0/0
204 Soft_ECC_Correct_Rate   0x001c   120   120   000    Old_age   Offline      -       0/0
230 Life_Curve_Status       0x0013   100   100   000    Pre-fail  Always       -       100
231 SSD_Life_Left           0x0013   100   100   010    Pre-fail  Always       -       0
233 SandForce_Internal      0x0032   000   000   000    Old_age   Always       -       10545
234 SandForce_Internal      0x0032   000   000   000    Old_age   Always       -       8849
241 Lifetime_Writes_GiB     0x0032   000   000   000    Old_age   Always       -       8849
242 Lifetime_Reads_GiB      0x0032   000   000   000    Old_age   Always       -       12693

SMART Error Log not supported

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%      9474         -
# 2  Short offline       Completed without error       00%      9474         -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
SDB

Code:

$ sudo smartctl -a /dev/sdb
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.19.0-13.1-liquorix-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Black
Device Model:     WDC WD1003FZEX-00MK2A0
Serial Number:    WD-WCC3F4SE0XN1
LU WWN Device Id: 5 0014ee 20b19edbd
Firmware Version: 01.01A01
User Capacity:    1,000,204,886,016 bytes [1.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2, ACS-3 T13/2161-D revision 3b
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sat Jan 12 21:58:39 2019 GMT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82)	Offline data collection activity
					was completed without error.
					Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0)	The previous self-test routine completed
					without error or no self-test has ever 
					been run.
Total time to complete Offline 
data collection: 		(11580) seconds.
Offline data collection
capabilities: 			 (0x7b) SMART execute Offline immediate.
					Auto Offline data collection on/off support.
					Suspend Offline collection upon new
					command.
					Offline surface scan supported.
					Self-test supported.
					Conveyance Self-test supported.
					Selective Self-test supported.
SMART capabilities:            (0x0003)	Saves SMART data before entering
					power-saving mode.
					Supports SMART auto save timer.
Error logging capability:        (0x01)	Error logging supported.
					General Purpose Logging supported.
Short self-test routine 
recommended polling time: 	 (   2) minutes.
Extended self-test routine
recommended polling time: 	 ( 120) minutes.
Conveyance self-test routine
recommended polling time: 	 (   5) minutes.
SCT capabilities: 	       (0x3035)	SCT Status supported.
					SCT Feature Control supported.
					SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   170   169   021    Pre-fail  Always       -       2500
  4 Start_Stop_Count        0x0032   097   097   000    Old_age   Always       -       3472
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   084   084   000    Old_age   Always       -       12192
 10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   100   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   099   099   000    Old_age   Always       -       1860
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       84
193 Load_Cycle_Count        0x0032   199   199   000    Old_age   Always       -       3397
194 Temperature_Celsius     0x0022   111   101   000    Old_age   Always       -       32
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       0

SMART Error Log Version: 1
ATA Error Count: 12 (device log contains only the most recent five errors)
	CR = Command Register [HEX]
	FR = Features Register [HEX]
	SC = Sector Count Register [HEX]
	SN = Sector Number Register [HEX]
	CL = Cylinder Low Register [HEX]
	CH = Cylinder High Register [HEX]
	DH = Device/Head Register [HEX]
	DC = Device Command Register [HEX]
	ER = Error register [HEX]
	ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 12 occurred at disk power-on lifetime: 9477 hours (394 days + 21 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  04 51 01 e0 4f c2 00  Error: ABRT

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  b0 d6 01 e0 4f c2 00 00      03:42:57.375  SMART WRITE LOG
  b0 d6 01 e0 4f c2 00 00      03:42:57.374  SMART WRITE LOG
  b0 d6 01 e0 4f c2 00 00      03:42:57.374  SMART WRITE LOG
  b0 d6 01 e0 4f c2 00 00      03:42:57.373  SMART WRITE LOG

Error 11 occurred at disk power-on lifetime: 9477 hours (394 days + 21 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  04 51 01 e0 4f c2 00  Error: ABRT

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  b0 d6 01 e0 4f c2 00 00      03:42:57.374  SMART WRITE LOG
  b0 d6 01 e0 4f c2 00 00      03:42:57.374  SMART WRITE LOG
  b0 d6 01 e0 4f c2 00 00      03:42:57.373  SMART WRITE LOG

Error 10 occurred at disk power-on lifetime: 9477 hours (394 days + 21 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  04 51 01 e0 4f c2 00  Error: ABRT

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  b0 d6 01 e0 4f c2 00 00      03:42:57.374  SMART WRITE LOG
  b0 d6 01 e0 4f c2 00 00      03:42:57.373  SMART WRITE LOG

Error 9 occurred at disk power-on lifetime: 9477 hours (394 days + 21 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  04 51 01 e0 4f c2 00  Error: ABRT

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  b0 d6 01 e0 4f c2 00 00      03:42:57.373  SMART WRITE LOG

Error 8 occurred at disk power-on lifetime: 7524 hours (313 days + 12 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  04 51 01 e0 4f c2 00  Error: ABRT

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  b0 d6 01 e0 4f c2 00 00      00:36:45.044  SMART WRITE LOG
  b0 d6 01 e0 4f c2 00 00      00:36:45.044  SMART WRITE LOG
  b0 d6 01 e0 4f c2 00 00      00:36:45.044  SMART WRITE LOG
  b0 d6 01 e0 4f c2 00 00      00:36:45.043  SMART WRITE LOG

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%      9477         -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

SDC

Code:

$ sudo smartctl -a /dev/sdc
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.19.0-13.1-liquorix-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     TOSHIBA HDWD120
Serial Number:    67FU64AGS
LU WWN Device Id: 5 000039 fe5e7a355
Firmware Version: MX4OACF0
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ATA8-ACS T13/1699-D revision 4
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sat Jan 12 21:59:59 2019 GMT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x84)	Offline data collection activity
					was suspended by an interrupting command from host.
					Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0)	The previous self-test routine completed
					without error or no self-test has ever 
					been run.
Total time to complete Offline 
data collection: 		(14439) seconds.
Offline data collection
capabilities: 			 (0x5b) SMART execute Offline immediate.
					Auto Offline data collection on/off support.
					Suspend Offline collection upon new
					command.
					Offline surface scan supported.
					Self-test supported.
					No Conveyance Self-test supported.
					Selective Self-test supported.
SMART capabilities:            (0x0003)	Saves SMART data before entering
					power-saving mode.
					Supports SMART auto save timer.
Error logging capability:        (0x01)	Error logging supported.
					General Purpose Logging supported.
Short self-test routine 
recommended polling time: 	 (   1) minutes.
Extended self-test routine
recommended polling time: 	 ( 241) minutes.
SCT capabilities: 	       (0x003d)	SCT Status supported.
					SCT Error Recovery Control supported.
					SCT Feature Control supported.
					SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   100   016    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0005   141   141   054    Pre-fail  Offline      -       66
  3 Spin_Up_Time            0x0007   126   126   024    Pre-fail  Always       -       300 (Average 299)
  4 Start_Stop_Count        0x0012   100   100   000    Old_age   Always       -       558
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000b   100   100   067    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0005   124   124   020    Pre-fail  Offline      -       33
  9 Power_On_Hours          0x0012   100   100   000    Old_age   Always       -       2726
 10 Spin_Retry_Count        0x0013   100   100   060    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       404
192 Power-Off_Retract_Count 0x0032   098   098   000    Old_age   Always       -       3061
193 Load_Cycle_Count        0x0012   098   098   000    Old_age   Always       -       3061
194 Temperature_Celsius     0x0002   176   176   000    Old_age   Always       -       34 (Min/Max 15/43)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0022   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0008   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x000a   200   200   000    Old_age   Always       -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

truongtfg
Forum Novice
Posts: 19
Joined: Sun Jul 15, 2018 9:22 am

Re: pCloud & File Management

#9

Post by truongtfg » Mon Jan 14, 2019 10:18 am

I see that smartctl reports some errors on your sdb drive (Western Digital Black). Is there any chance the data you sync (or the pCloud AppImage) lives on that drive? If so, move the data and the AppImage file to another drive and see if things improve. If not, I recommend replacing the SATA cables and running fsck on the drive. Remember to unmount the drive before running fsck, and you may need to run it as root.

Code: Select all

fsck /dev/sdb1
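If you want to rehearse the check safely first, this sketch builds a throwaway ext4 image in /tmp and runs a read-only fsck over it, so nothing touches a real disk (file names here are examples; the e2fsprogs tools are assumed to be installed):

```shell
# Create a small throwaway ext4 filesystem inside a regular file.
truncate -s 8M /tmp/fsck-demo.img
mkfs.ext4 -q -F /tmp/fsck-demo.img   # -F: allow a non-block-device target

# -f forces a full check; -n opens read-only and fixes nothing.
fsck.ext4 -f -n /tmp/fsck-demo.img
```

On the real partition, unmount it first (sudo umount /dev/sdb1) and drop -n once you are ready to let fsck repair what it finds.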
By the way, there may be another issue that is really difficult to detect, and that is the PSU. I once had a PC that lagged and crashed frequently; tests on the RAM and HDD showed nothing, and replacing the SATA cables had no effect. Only when I replaced the PSU did everything run smoothly again.

And one more thing: if possible, consider replacing the drive. Whenever SMART shows errors, it is never a good sign.

All the best

User avatar
spelk
Forum Novice
Posts: 7
Joined: Sat Dec 01, 2018 2:28 pm

Re: pCloud & File Management

#10

Post by spelk » Thu Mar 07, 2019 6:03 am

As an update to this, the issue is still happening, and it's happening across multiple MX Linux installations on different hardware (desktop, laptop) running different kernels (liquorix 4.19 and 4.19).

With pCloud.AppImage running, Thunar can take 30-40 seconds, sometimes more, to load up. This happens when running Thunar from the File Manager panel launcher ("exo-open --launch FileManager %u") and also if I specifically add Thunar to the panel ("thunar %F").

If I kill the pCloud.AppImage running in the task tray, Thunar will instantly open, without a problem.

pCloud 64-bit AppImage can be downloaded here: https://www.pcloud.com/download-free-on ... orage.html

I'm running the latest version 1.4.8

(These findings are on a different piece of hardware from the initial request for help, but the symptoms are the same, and I'm trying to work out what about the FUSE mounting setup causes Thunar to stall for so long before it can load.)

I think pCloud uses FUSE. Investigating around the topic, findmnt reports these mounts:

Code: Select all

$ findmnt
TARGET                         SOURCE     FSTYPE    OPTIONS
/                              /dev/sda1  ext4      rw,noatime
├─/sys                         sysfs      sysfs     rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/pstore             pstore     pstore    rw,relatime
│ ├─/sys/fs/cgroup             cgroup     tmpfs     rw,relatime,size=12k,mode=755
│ │ └─/sys/fs/cgroup/systemd   systemd    cgroup    rw,nosuid,nodev,noexec,relatime,release_agent=/r
│ └─/sys/fs/fuse/connections   fusectl    fusectl   rw,relatime
├─/proc                        proc       proc      rw,nosuid,nodev,noexec,relatime
├─/dev                         udev       devtmpfs  rw,nosuid,relatime,size=2946992k,nr_inodes=73674
│ └─/dev/pts                   devpts     devpts    rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmod
├─/run                         tmpfs      tmpfs     rw,nosuid,noexec,relatime,size=595204k,mode=755
│ ├─/run/lock                  tmpfs      tmpfs     rw,nosuid,nodev,noexec,relatime,size=5120k
│ ├─/run/shm                   tmpfs      tmpfs     rw,nosuid,nodev,noexec,relatime,size=2868120k
│ ├─/run/rpc_pipefs            rpc_pipefs rpc_pipef rw,relatime
│ ├─/run/user/115              tmpfs      tmpfs     rw,nosuid,nodev,relatime,size=595200k,mode=700,u
│ └─/run/user/1000             tmpfs      tmpfs     rw,nosuid,nodev,relatime,size=595200k,mode=700,u
│   └─/run/user/1000/gvfs      gvfsd-fuse fuse.gvfs rw,nosuid,nodev,relatime,user_id=1000,group_id=1
├─/media/hold                  /dev/sda4  ext4      rw,relatime
├─/home                        /dev/sda3  ext4      rw,noatime
│ └─/home/ian/pCloudDrive      pCloud.fs  fuse      rw,nosuid,nodev,relatime,user_id=1000,group_id=1
└─/var/tmp/.mount_pcloudmIkcWa pcloud.AppImage
                                          fuse.pclo ro,nosuid,nodev,relatime,user_id=1000,group_id=1
mtab reports these mounts:

Code: Select all

$ cat /etc/mtab
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs rw,nosuid,relatime,size=2946992k,nr_inodes=736748,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=595204k,mode=755 0 0
/dev/sda1 / ext4 rw,noatime 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
pstore /sys/fs/pstore pstore rw,relatime 0 0
tmpfs /run/shm tmpfs rw,nosuid,nodev,noexec,relatime,size=2868120k 0 0
/dev/sda4 /media/hold ext4 rw,relatime 0 0
/dev/sda3 /home ext4 rw,noatime 0 0
rpc_pipefs /run/rpc_pipefs rpc_pipefs rw,relatime 0 0
cgroup /sys/fs/cgroup tmpfs rw,relatime,size=12k,mode=755 0 0
systemd /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,release_agent=/run/cgmanager/agents/cgm-release-agent.systemd,name=systemd 0 0
tmpfs /run/user/115 tmpfs rw,nosuid,nodev,relatime,size=595200k,mode=700,uid=115,gid=126 0 0
tmpfs /run/user/1000 tmpfs rw,nosuid,nodev,relatime,size=595200k,mode=700,uid=1000,gid=1000 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
pcloud.AppImage /var/tmp/.mount_pcloudmIkcWa fuse.pcloud.AppImage ro,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
gvfsd-fuse /run/user/1000/gvfs fuse.gvfsd-fuse rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
pCloud.fs /home/ian/pCloudDrive fuse rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
I don't pretend to fully understand which mount is which, but I can see that pcloud.AppImage has a mount and pCloud.fs has a mount point too.
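To pull just the FUSE entries out of listings like these, findmnt can filter by filesystem type. A sketch, assuming util-linux findmnt; the type names are copied from the mtab output above:

```shell
# Show only the FUSE-backed mounts: the AppImage mount,
# the gvfs gateway, and the pCloudDrive itself.
findmnt -t fuse,fuse.pcloud.AppImage,fuse.gvfsd-fuse -o TARGET,SOURCE,FSTYPE
```

On this machine that should reduce the tree to the three fuse lines seen in the full listing.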

Using SpaceFM, which now seems to open snappily regardless of pCloud, I can see the devices mounted.

One here:
/var/tmp/.mount_pcloudmIkcWa

with its contents:

Code: Select all

$ ls -all
total 123535
-rwxr-xr-x 1 root root     7336 Feb 11 17:01 AppRun
-rw-rw-r-- 1 root root    26693 Feb 11 17:00 blink_image_resources_200_percent.pak
-rw-rw-r-- 1 root root       15 Feb 11 17:00 content_resources_200_percent.pak
-rw-rw-r-- 1 root root  8709456 Feb 11 17:00 content_shell.pak
-rw-rw-r-- 1 root root 10197040 Feb 11 17:00 icudtl.dat
-rwxr-xr-x 1 root root  2779552 Feb 11 17:00 libffmpeg.so
-rwxr-xr-x 1 root root 19670376 Feb 11 17:00 libnode.so
-rw-rw-r-- 1 root root     1060 Feb 11 17:00 LICENSE.electron.txt
-rw-rw-r-- 1 root root  1816180 Feb 11 17:00 LICENSES.chromium.html
drwxrwxr-x 2 root root        0 Feb 11 17:01 locales
-rw-rw-r-- 1 root root   221973 Feb 11 17:00 natives_blob.bin
-rwxr-xr-x 1 root root 81166616 Feb 11 17:00 pcloud
-rw-rw-r-- 1 root root      287 Feb 11 17:01 pcloud.desktop
lrwxrwxrwx 1 root root       47 Feb 11 17:01 pcloud.png -> usr/share/icons/hicolor/512x512/apps/pcloud.png
-rw-rw-r-- 1 root root   164181 Feb 11 17:00 pdf_viewer_resources.pak
drwxrwxr-x 3 root root        0 Feb 11 17:01 resources
-rw-rw-r-- 1 root root  1532052 Feb 11 17:00 snapshot_blob.bin
-rw-rw-r-- 1 root root   152522 Feb 11 17:00 ui_resources_200_percent.pak
drwxrwxr-x 4 root root        0 Feb 11 17:01 usr
-rw-rw-r-- 1 root root    57761 Feb 11 17:00 views_resources_200_percent.pak

which I'm guessing is the AppImage container's working directory for running the software,

and one here:
/home/ian/pCloudDrive

Which is my fuse mounted virtual drive pointing at the pCloud servers - where my content is.

In SpaceFM there is also a device mounted here:
/run/user/1000/gvfs

But this points to an empty directory as far as I can tell, and I'm not sure what role gvfs-fuse plays in this pantomime.

I'm not entirely sure why SpaceFM now works OK with pCloud but Thunar does not. There must be some issue with Thunar reading or evaluating the pCloud-mounted cloud drive as it opens up.

Interestingly, if you run Thunar with sudo, it opens instantly and shows pCloudDrive as a device, but selecting it shows it as empty, even though it is actually connected and has content on it. So Thunar under sudo isn't connecting to the cloud drive; I'm guessing that's because the AppImage is running under my user profile.
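That empty directory under root matches default FUSE behaviour: a FUSE filesystem is normally visible only to the user who mounted it, and every other user, including root, sees nothing. A mounting program can pass -o allow_other to change that, but only after it is enabled system-wide in /etc/fuse.conf; whether the pCloud client exposes such an option is an assumption to verify, so this is just a sketch of the relevant setting:

```
# /etc/fuse.conf (sketch)
# Lets non-root mounting programs request -o allow_other;
# has no effect unless the FUSE program actually asks for it.
user_allow_other
```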

I'd skip Thunar and just use SpaceFM as a workaround, but a lot of programs use Thunar as their default mechanism for loading/saving files and/or selecting directories.

I'm sure it's just a small issue with Thunar not being able to do something on the FUSE-mounted drive, but I'm not knowledgeable enough about that mechanism to pin the issue down yet.

Any further help would be appreciated.
