TikTok owner ByteDance sacks intern for sabotaging AI project
  • dan dan 25m ago 100%

    interns are there to learn. They’re not supposed to do work that would otherwise be assigned to a paid employee,

    Which industry do you work in? In "big tech", it's very common for interns to work on regular projects that full-time employees would otherwise work on. Usually a senior-ish FTE would determine the best project, write a project plan, scope it, define milestones and deliverables, etc, and the intern would just work on the actual implementation.

    I'm a senior software engineer on my team, and when it's intern season, we usually find things in our backlog that we haven't had time to implement and that would be interesting for an intern to work on, and spec them out.

    5
  • What do I need to watch out for when buying an unlocked phone on the used market?
  • dan dan 38m ago 100%

    Verizon deliberately conflates the term unlocked to mean not locked to a carrier (can use a non-Verizon SIM)

    This is what "unlocked" usually means to the general population though. If you search your favourite search engine for "how to unlock phone", most (if not all) results will be either about carrier locks or about getting into the phone if you forget your PIN/password.

    Someone knowledgeable enough to even know about the bootloader would usually explicitly say "unlocked bootloader" to avoid the ambiguity.

    3
  • AI Seeks Out Racist Language in Property Deeds for Termination
  • dan dan 50m ago 100%

    Did you see something that said it was an LLM?

    Edit: Indeed it's an LLM. They published the model here: https://huggingface.co/reglab-rrc/mistral-rrc

    1
  • Kroger’s plans to roll out facial recognition at its grocery stores is attracting criticism from lawmakers, who warn it could lead to surge pricing and put customers’ personal data at risk
  • dan dan 57m ago 100%

    In the USA, facial recognition isn't legal in some states (e.g. the company needs written permission from the individual to collect their facial data in Illinois), and other stores have had issues with facial recognition (e.g. https://www.ftc.gov/news-events/news/press-releases/2023/12/rite-aid-banned-using-ai-facial-recognition-after-ftc-says-retailer-deployed-technology-without) so I'm not sure how Kroger think they'll succeed with this.

    5
  • Kroger’s plans to roll out facial recognition at its grocery stores is attracting criticism from lawmakers, who warn it could lead to surge pricing and put customers’ personal data at risk
  • dan dan 1h ago 95%

    “To be clear, Kroger does not and has never engaged in ‘surge pricing,’” the statement said. “Any test of electronic shelf tags is designed to lower prices for more customers where it matters most.”

    Isn't that the same thing? It doesn't matter if you raise prices on demand or lower them, the outcome is the same - different pricing at different times.

    18
  • Syncthing Android app discontinued
  • dan dan 4h ago 100%

    Good idea to send donations to the syncthing-fork devs to keep it alive though.

    5
  • Syncthing Android app discontinued
  • dan dan 4h ago 100%

    In that case, could the syncthing-fork app be renamed to syncthing, now that it'll probably be the main Android app for Syncthing?

    3
  • Syncthing Android app discontinued
  • dan dan 4h ago 100%

    mostly a wrapper around their proprietary library

    I'm not familiar with exactly what Bitwarden are doing, but Nvidia are doing something similar to what you described with their Linux GPU drivers. They launched new open-source drivers (not nouveau) for Turing (GTX 16 and RTX 20 series) and newer GPUs. What they're actually doing is moving more and more functionality out of the drivers into the closed-source firmware, reducing the amount of code they need to open source. Maybe that's okay? I'm not sure how I feel about it.

    2
  • Syncthing Android app discontinued
  • dan dan 4h ago 100%

    Open source software doesn't have a reason to lock you in like proprietary software does :)

    More and more proprietary SaaS systems are allowing data exports now, to comply with laws like the GDPR "right to know". Say what you want about Google and Facebook, but they were the first big companies to start allowing data to be exported before there was any law requiring it - Facebook in 2010 and Google in 2011.

    6
  • Server dealer keeps hitting at Elon Musk for $61 million bill — Wiwynn sues X for unpaid IT infrastructure products
  • dan dan 19h ago 71%

    Forums are social media, especially so for sites like Reddit and Lemmy where the subforums are community-created.

    Wikipedia:

    Social media are interactive technologies that facilitate the creation, sharing and aggregation of content (such as ideas, interests, and other forms of expression) amongst virtual communities and networks.

    Merriam-Webster:

    forms of electronic communication (such as websites for social networking and microblogging) through which users create online communities to share information, ideas, personal messages, and other content (such as videos)

    Britannica:

    forms of electronic communication (such as Web sites) through which people create online communities to share information, ideas, personal messages, etc.

    3
    Firefox 1d ago
    The (pre)history of Mozilla’s localization repository infrastructure
  • dan dan 19h ago 100%

    GitHub has (or used to have?) a feature to import SVN repos into GitHub, and you could use a GitHub Git repo via SVN, which is probably why they mentioned GitHub specifically. Other Git hosts didn't have those features.

    1
  • Server dealer keeps hitting at Elon Musk for $61 million bill — Wiwynn sues X for unpaid IT infrastructure products
  • dan dan 1d ago 54%

    How about we read an article before we start spewing shit everywhere?

    Good luck lol. The top comments are almost always people that didn't actually read the article, just the headline. I see it on practically all social media sites, not just Lemmy.

    1
  • I have an Nvidia GPU, should I get an Intel or AMD CPU?
  • dan dan 1d ago 100%

    Doesn't always work, at least on Fedora. There, the kernel module is built after the package is installed (so you need to wait 5-10 minutes before rebooting), and I guess it sometimes doesn't build properly. It's happened to me twice in a few months, though it usually works fine.

    1
  • I have an Nvidia GPU, should I get an Intel or AMD CPU?
  • dan dan 1d ago 100%

    I found that Firefox scrolling was janky even with X11 when using a mouse. You can turn off smooth scrolling in the options, and turn off kinetic scrolling in about:config (apz.gtk.kinetic_scroll.enabled).

    3
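    For reference, these are the two preferences mentioned above as they'd appear in about:config, with the values that disable the behaviour (the smooth-scrolling one also has a checkbox in the regular options UI):

    ```
    general.smoothScroll = false                  // disable smooth scrolling
    apz.gtk.kinetic_scroll.enabled = false        // disable kinetic scrolling
    ```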
  • I have an Nvidia GPU, should I get an Intel or AMD CPU?
  • dan dan 2d ago 100%

    Working fine for me on Fedora 40 with a 6.12 kernel. You need to ensure your desktop environment is modern and supports explicit sync. KDE added support in Plasma 6.1, so Plasma 6.1 and Nvidia driver 560 or above should have no issues. I don't use GNOME but they added support in 46.1 as far as I know.

    One of my favourite underrated things about Wayland is that I could finally disable pasting when clicking the mousewheel. That's so ingrained into XFree86/X11 that it's impossible to disable.
    (disabling it only affects apps that use Wayland)

    2
  • I have an Nvidia GPU, should I get an Intel or AMD CPU?
  • dan dan 2d ago 80%

    On Linux, AMD GPUs work significantly better than Nvidia ones. If you have a choice, choose an AMD. Nvidia is mostly fine though. Even Wayland works well on Nvidia now (after the 560 driver release).

    Sometimes you'll hit issues with memory management if you have <=8GB VRAM, since the Nvidia driver doesn't support swapping infrequently accessed parts of VRAM into regular system RAM, like it does on Windows and like AMD does on both Windows and Linux. It's a long-standing issue.

    You may also need to manually reinstall the driver after kernel updates. In theory, it's improving as Nvidia are moving most of the driver logic into the firmware, and making the driver thinner with the new open-source out-of-tree driver (https://github.com/NVIDIA/open-gpu-kernel-modules).

    For CPU, I'd definitely go with AMD instead of Intel. Intel aren't having such a good time at the moment.

    3
  • Goodbye [System32 Comics]
  • dan dan 2d ago 66%

    The Internet did not have the advertising presence it does now when it was conceived.

    Do you mean back when it was only the government and universities connected to it, before the web existed? Those times were very different. Practically every user was contributing to the internet in some way, either through time (like actually creating the software to use it, and once the web existed, creating sites) or money.

    These days, there's a significantly larger number of freeloaders that want everything for free, without contributing anything back. So far, advertising has been the only effective model to support such users that don't want to pay.

    3
  • Goodwill is out of control
  • dan dan 2d ago 100%

    Use Vinnie's instead (what we call St Vincent de Paul in Australia)

    Or a local store.

    1
  • 23andMe’s entire board resigned on the same day. Founder Anne Wojcicki still thinks the startup is savable
  • dan dan 2d ago 100%

    Every American who has private insurance right now, could pay that exact same amount instead to the federal government and let it pay our medical bills

    That's called a single-payer healthcare system, and it's a good idea. The government can negotiate pricing for the entire country, rather than having a lot of smaller insurance companies that are all in it to make a profit.

    Australia has a hybrid public/private system where everyone has public health care (so you can see a doctor and get treated even if you don't have any money), but you can choose to get private insurance if you want to. It's a decent idea.

    1
  • YSK: Google is Killing uBlock Origin. No Chromium Browser is Safe.
  • dan dan 3d ago 33%

    Would you rather pay for every site you use? Not every site can afford to have someone else cover the cost for you (which is how Lemmy servers are run, for example), and the only other business models that have worked online are either running ads, or getting users to pay for access.

    -1
  • I noticed that Spectacle has an option to upload to Imgur and Nextcloud. Is there a way to allow it to upload to an SFTP server? Ideally I'd like for it to upload the file via SFTP then put the URL on my clipboard, which is what I do with ShareX on Windows.

    10
    3

    I love Sentry, but it's very heavy. It runs close to 50 Docker containers, some of which use more than 1GB RAM each. I'm running it on a VPS with 10GB RAM and it barely fits on there. They used to say 8GB RAM is required but [bumped it to 16GB RAM](https://github.com/getsentry/self-hosted/pull/2585) after I started using it.

    It's built for large-scale deployments and has a nice scalable enterprise-ready design using things like Apache Kafka, but I just don't need that since all I'm using it for is tracking bugs in some relatively small C# and JavaScript projects, which may amount to a few hundred events per week if that. I don't use any of the fancier features in Sentry, like the live session recording / replay or the performance analytics.

    I could move it to one of my 16GB or 24GB RAM systems, but instead I'm looking to evaluate some lighter-weight systems to replace it. What I need is:

    - Support for C# and JavaScript, including mapping stack traces to original source code using debug symbols for C# and source maps for JavaScript.
    - Ideally supports React component stack traces in JS.
    - Automatically group the same bugs together, if multiple people hit the same issue
    - See how many users are affected by a bug
    - Ignore particular errors
    - Mark a bug as "fixed in next release" and reopen it if it's logged again in a new release
    - Associate bugs with GitHub issues
    - Ideally supports login via OpenID Connect

    Any suggestions? Thanks!

    15
    6

    On a small form factor PC with an i5-9500, Debian 12, 6.2.16 kernel, running Proxmox, `powertop` shows the following idle stats:

    ```
    PowerTOP 2.14   Overview   Idle stats   Frequency stats   Device stats   Tunables   WakeUp

              Pkg(HW)  |            Core(HW) |            CPU(OS) 0
                       |                     | C0 active   2.8%
                       |                     | POLL        0.0%    0.0 ms
                       |                     | C1          1.1%    0.4 ms
    C2 (pc2)    7.2%   |                     |
    C3 (pc3)    5.5%   | C3 (cc3)    0.0%    | C3          0.1%    0.1 ms
    C6 (pc6)    1.5%   | C6 (cc6)    1.9%    | C6          2.2%    0.6 ms
    C7 (pc7)   75.2%   | C7 (cc7)   92.8%    | C7s         0.0%    0.0 ms
    C8 (pc8)    0.0%   |                     | C8         21.5%    2.5 ms
    C9 (pc9)    0.0%   |                     | C9          0.0%    0.0 ms
    C10 (pc10)  0.0%   |                     |
                       |                     | C10        72.8%   12.5 ms
                       |                     | C1E         0.4%    0.2 ms
                       |            Core(HW) |            CPU(OS) 1
                       |                     | C0 active   1.4%
                       |                     | POLL        0.0%    0.0 ms
                       |                     | C1          0.7%    0.9 ms
                       | C3 (cc3)    0.1%    | C3          0.1%    0.2 ms
                       | C6 (cc6)    1.0%    | C6          1.1%    0.8 ms
                       | C7 (cc7)   96.3%    | C7s         0.0%    0.0 ms
                       |                     | C8         18.9%    2.9 ms
                       |                     | C9          0.0%    0.0 ms
                       |                     | C10        78.3%   24.8 ms
                       |                     | C1E         0.0%    0.0 ms
    ...
    ```

    On a custom-built server with an i5-13500, Asus Pro WS W680M-ACE SE motherboard, Unraid (which uses Slackware), 6.1.38 kernel, it shows the following output:

    ```
    PowerTOP 2.15   Overview   Idle stats   Frequency stats   Device stats   Tunables   WakeUp

              Pkg(HW)  |            Core(HW) |            CPU(OS) 0           CPU(OS) 1
                       |                     | C0 active   5.9%               0.9%
                       |                     | POLL        0.1%    0.0 ms     0.0%    0.0 ms
                       |                     | C1_ACPI    14.2%    0.2 ms     1.0%    0.1 ms
    C2 (pc2)    0.0%   |                     | C2_ACPI    39.2%    0.8 ms    27.0%    0.9 ms
    C3 (pc3)    0.0%   | C3 (cc3)    0.0%    | C3_ACPI    33.6%    1.2 ms    69.7%    3.0 ms
    C6 (pc6)    0.0%   | C6 (cc6)    1.1%    |
    C7 (pc7)    0.0%   | C7 (cc7)    0.0%    |
    C8 (pc8)    0.0%   |                     |
    C9 (pc9)    0.0%   |                     |
    C10 (pc10)  0.0%   |                     |
                       |            Core(HW) |            CPU(OS) 2           CPU(OS) 3
                       |                     | C0 active  10.4%               0.5%
                       |                     | POLL        0.0%    0.0 ms     0.0%    0.0 ms
                       |                     | C1_ACPI    17.4%    0.2 ms     0.4%    0.2 ms
                       |                     | C2_ACPI    14.3%    0.8 ms     4.9%    0.6 ms
                       | C3 (cc3)    0.0%    | C3_ACPI    41.8%    5.4 ms    93.5%    5.5 ms
                       | C6 (cc6)    5.9%    |
                       | C7 (cc7)   26.7%    |
                       |            Core(HW) |            CPU(OS) 4           CPU(OS) 5
                       |                     | C0 active  11.7%               0.2%
                       |                     | POLL        0.0%    0.1 ms     0.0%    0.0 ms
                       |                     | C1_ACPI    19.0%    0.1 ms     0.0%    0.0 ms
                       |                     | C2_ACPI    11.3%    0.7 ms     0.0%    0.0 ms
                       | C3 (cc3)    0.0%    | C3_ACPI    39.6%    7.7 ms    99.6%    7.0 ms
                       | C6 (cc6)    1.3%    |
                       | C7 (cc7)   25.4%    |
    ...
    ```

    Both systems have C-states enabled in the BIOS. I have a few questions I'm hoping someone can help with:

    - Why does the older system show more C-states in the right-most "CPU(OS)" column?
    - What does it mean when they're suffixed with "_ACPI" like in the output from the new system?
    - How do I debug the new system not hitting any CPU package C-states?

    I can't find any documentation about this, neither on the man page nor on Intel's site (the official powertop URL https://01.org/powertop doesn't go anywhere useful any more). Thanks!

    8
    3
    https://upvote.au/post/42206

    Google Analytics is broken on a bunch of my sites thanks to the GA4 migration. Since I have to update everything anyways, I'm looking at the possibility of replacing Google Analytics with something I self-host that's more privacy-focused.

    I've tried Plausible, Umami and Swetrix (the latter of which I like the most). They're all very lightweight and most are pretty efficient due to their use of a column-oriented database (Clickhouse) for storing the analytics data - makes way more sense than a row-oriented database like MySQL for this use case.

    However, these systems are all cookie-less. This is *usually* fine, however one of my sites is commonly used in schools on their computers. Cookieless analytics works by tracking sessions based on IP address and user-agent, so in places like schools with one external IP and the same browser on every computer, it just looks like one user in the analytics. I'd like to know the actual number of users.

    I'm looking for a similarly lightweight analytics system that does use cookies (first-party cookies only) to handle this particular use case. Does anyone know of one? Thanks!

    Edit: it doesn't have to actually be a cookie - just being able to explicitly specify a session ID instead of inferring one based on IP and user-agent would suffice.

    25
    13
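    A minimal sketch of the "explicit session ID" idea from the post above: instead of fingerprinting IP + user-agent, the site assigns a random ID stored in a first-party cookie and sends it with each analytics event. All names here (`makeSessionId`, `sessionCookieString`, `SESSION_COOKIE`) are made up for illustration; the real integration depends on whatever API the analytics tool exposes.

    ```javascript
    // Hypothetical cookie name for illustration only.
    const SESSION_COOKIE = "_analytics_sid";

    // Generate a random session ID. crypto.randomUUID() is available as a
    // global in modern browsers and in Node 19+.
    function makeSessionId() {
      return crypto.randomUUID();
    }

    // Build the value to assign to document.cookie for a first-party session
    // cookie. SameSite=Lax keeps it out of third-party contexts; a 30-minute
    // Max-Age mimics a typical analytics session window.
    function sessionCookieString(id) {
      return `${SESSION_COOKIE}=${id}; Max-Age=1800; Path=/; SameSite=Lax`;
    }

    // In a browser you would do:
    //   document.cookie = sessionCookieString(makeSessionId());
    // and then attach the ID to each event sent to the analytics backend.
    ```

    This sidesteps the one-IP-per-school problem entirely, since each browser profile gets its own ID regardless of the shared external IP.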

    I'm replacing an SFF PC (HP ProDesk 600 G5 SFF) I'm using as a server with a larger one that'll function as a server and a NAS, and all I want is a case that would have been commonplace 10-15 years ago:

    - Fits an ATX motherboard.
    - Fits at least 4-5 hard drives.
    - Is okay sitting on its side instead of upright (or even better, is built to be horizontal) since it'll be sitting on a wire shelving unit (replacing the SFF PC here: https://upvote.au/post/11946)
    - No glass side panel, since it'll be sitting horizontally.
    - Ideally space for a fan on the left panel

    It seems like cases like this are hard to find these days. The two I see recommended are the Fractal Design Define R5 and the Cooler Master N400, both of which are quite old. The Streacom F12C was really nice but it's long gone now, having been discontinued many years ago.

    Unfortunately I don't have enough depth for a full-depth rackmount server; I've got a very shallow rack just for networking equipment.

    Does anyone have recommendations for any cases that fit these requirements? My desktop PC has a Fractal Design Define R4 that I bought close to 10 years ago... I'm tempted to just buy a new case for it and repurpose the Define R4 for the server.

    14
    25

    Sorry for the long post. tl;dr: I've already got a small home server and need more storage. Do I replace an existing server with one that has more hard drive bays, or do I get a separate NAS device?

    ________

    I've got some storage VPSes "in the cloud":

    * 10TB disk / 2GB RAM with HostHatch in LA
    * 100GB NVMe / 16GB RAM with HostHatch in LA
    * 3.5TB disk / 2GB RAM with Servarica in Canada

    The 10TB VPS has various files on it - offsite storage of alert clips from my cameras, photos, music (which I use with Plex on the NVMe VPS via NFS), other miscellaneous files (using Seafile), backups from all my other VPSes, etc. The 3.5TB one is for a backup of the most important files from that.

    The issue I have with the VPSes is that since they're shared servers, there's limits in terms of how much CPU I can use. For example, I want to run PhotoStructure for all my photos, but it needs to analyze all the files initially. I limit Plex to maximum 50% of one CPU, but limiting things like PhotoStructure would make them way slower.

    I've had these for a few years. I got them when I had an apartment with no space for a NAS, expensive power, and unreliable Comcast internet. Times change... Now I've got a house with space for home servers, solar panels so running a server is "free", and 10Gbps symmetric internet thanks to [a local ISP, Sonic](https://www.sonic.com/).

    Currently, at home I've got one server: a [HP ProDesk SFF PC](https://support.hp.com/us-en/document/c06388056) with a Core i5-9500, 32GB RAM, 1TB NVMe, and a single 14TB WD Purple Pro drive. It records my security cameras (using Blue Iris) and runs home automation stuff (Home Assistant, etc). It pulls around 41 watts with its regular load: 3 VMs, ~12% CPU usage, constant ~34Mbps traffic from the security cameras, all being written to disk.

    So, I want to move a lot of these files from the 10TB VPS into my house. 10TB is a good amount of space for me, maybe in RAID5 or whatever is recommended instead these days. I'd keep the 10TB VPS for offsite backups and camera alerts, and cancel the other two. Trying to work out the best approach:

    1. **Buy a NAS**. Something like a QNAP TS-464 or Synology DS923+. Ideally 10GbE since my network and internet connection are both 10Gbps.
    2. **Replace my current server with a bigger one**. I'm happy with my current one; all I really need is something with more hard drive bays. The SFF PC only has a single drive bay, its motherboard only has a single 6Gbps SATA port, and the only PCIe slots are taken by a 10Gbps network adapter and a Google Coral TPU.
    3. **Build a NAS PC and use it alongside my current server**. TrueNAS seems interesting now that they have a Linux version (TrueNAS Scale). Unraid looks nice too.

    Any thoughts? I'm leaning towards option 2 since it'll use less space and power compared to having two separate systems, but maybe I should keep security camera stuff separate? Not sure.

    29
    27
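    For the "RAID5 or whatever is recommended" sizing question above, a quick back-of-envelope sketch (the helper function is mine, not from any particular NAS software): RAID5 loses one drive's worth of capacity to parity, RAID6 loses two.

    ```javascript
    // Usable capacity for common parity RAID levels.
    // RAID5 dedicates 1 drive's capacity to parity; RAID6 dedicates 2.
    function usableCapacityTB(driveCount, driveSizeTB, level) {
      const parityDrives = { raid5: 1, raid6: 2 }[level];
      if (parityDrives === undefined || driveCount <= parityDrives) {
        throw new Error("unsupported RAID level or too few drives");
      }
      return (driveCount - parityDrives) * driveSizeTB;
    }
    ```

    So for example four 4TB drives in RAID5 give 12TB usable, and five 4TB drives in RAID6 also give 12TB but survive two drive failures - which is why RAID6 is often recommended over RAID5 at today's drive sizes.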

    I have a 10Gbps internet connection. On a system with a 10Gbps Ethernet card, I can get ~8Gbps down and ~6Gbps up:

    ![](https://www.speedtest.net/result/c/7e69c527-71c2-4209-83de-c5300e8615f5.png)

    I'd expect this to easily max out a 2.5Gbps network connection. However, while the upload is maxed (or close to it), I can only ever get ~1.0 to 1.5Gbps down:

    ![](https://www.speedtest.net/result/c/4d138dab-d0b3-45fa-94a7-865f436f808e.png)

    Both tests were performed on the same system. The only difference is that the first one uses a TRENDnet 10Gbps PCIe network card (which uses an Aquantia AQC107 chipset) whereas the second one uses the onboard NIC on my motherboard (Intel I225-V chipset). This is consistent across two devices that have 10Gbps ports and two devices that have 2.5Gbps ports.

    I'm using an AdTran 622v ONT provided by my internet provider, a TP-Link ER8411 router, and a MikroTik CRS312-4C+8XG-RM switch. I'm using CAT6 cabling, except for the connection between the router and the switch which uses an SFP+ DAC cable.

    I haven't been able to figure it out. The 'slower' speeds are still great, I just don't understand why it can't achieve more than 1.5Gbps down over a 2.5Gbps network connection. Any ideas?

    13
    7
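    One thing worth ruling out for the question above is plain protocol overhead. With a standard 1500-byte MTU, every frame carries 1460 bytes of TCP payload out of 1538 bytes on the wire, so a 2.5Gbps link tops out around 2.37Gbps of TCP goodput - well above the ~1.5Gbps observed, meaning overhead alone can't explain it. A rough sketch of the arithmetic (the helper function is mine):

    ```javascript
    // Theoretical maximum TCP goodput on Ethernet with a 1500-byte MTU.
    // On-wire bytes per frame: 12 (inter-frame gap) + 8 (preamble) +
    // 14 (Ethernet header) + 1500 (payload) + 4 (FCS) = 1538.
    // TCP payload per frame: 1500 - 20 (IP header) - 20 (TCP header) = 1460.
    function maxTcpGoodputGbps(linkGbps) {
      const wireBytes = 12 + 8 + 14 + 1500 + 4; // 1538
      const tcpPayload = 1500 - 20 - 20;        // 1460
      return linkGbps * (tcpPayload / wireBytes);
    }
    ```

    maxTcpGoodputGbps(2.5) comes out to roughly 2.37, so the bottleneck is more likely something like a negotiation/duplex issue, flow control, or a limitation in one of the devices in the path.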

    I couldn't find a "Home Networking" community, so this seemed like the best place to post :)

    My house has this small closet in the hallway and thought it'd make a perfect place to put networking equipment. I got an electrician to install power outlets in it, ran some CAT6 myself (through the wall, down into the crawlspace, to several rooms), and now I finally have a proper networking setup that isn't just cables running across the floor.

    The rack is a basic StarTech two-post rack ([https://www.amazon.com/gp/product/B001U14MO8/](https://www.amazon.com/gp/product/B001U14MO8/)) and the shelving unit is an AmazonBasics one that ended up perfectly fitting the space ([https://www.amazon.com/gp/product/B09W2X5Y8F/](https://www.amazon.com/gp/product/B09W2X5Y8F/)).

    In the rack, from top to bottom (prices in US dollars):

    * TP-Link ER8411 10Gbps router. My main complaint about it is that the eight 'RJ45' ports are all Gigabit, and there's only two 10Gbps ports (one SFP+ for WAN, and one SFP+ for LAN). It can definitely reach 10Gbps NAT throughput though. $350
    * Wiitek SFP+ to RJ45 module for connecting Sonic's ONT (which only has an RJ45 port), and 10Gtek SFP+ DAC cable to connect router to switch.
    * MikroTik CRS312-4C+8XG-RM managed switch (runs RouterOS). 12 x 10Gbps ports. I bought it online from Europe, so it ended up being ~$520 all-in, including shipping.
    * Cable Matters 24-port keystone patch panel.
    * TP-Link TL-SG1218MPE 16-port Gigabit PoE switch. 250 W PoE power budget. Used for security cameras - three cameras installed so far.
    * Tripp Lite 14 outlet PDU.

    Other stuff:

    * AdTran 622v ONT provided by my internet provider (Sonic), mounted to the wall.
    * HP ProDesk 600 G5 SFF PC with Core i5-9500. Using it for a home server running Home Assistant, Blue Iris, Node-RED, Zigbee2MQTT, and a few other things. Bought it off eBay for $200.
    * Sonoff Zigbee dongle plugged in to the front USB port
    * (next to the PC) Raspberry Pi 4B with SATA SSD plugged in to it. Not doing anything at the moment, as I migrated everything to the PC.
    * (not pictured) Wireless access point is just a basic Netgear one I bought from Costco a few years ago. It's sitting on the top shelf. I'm going to replace it with a TP-Link Omada ceiling-mounted one once their wifi 7 access points have been released.

    Speed test: [https://www.speedtest.net/my-result/d/3740ce8b-bba5-486f-9aad-beb187bd1cdc](https://www.speedtest.net/my-result/d/3740ce8b-bba5-486f-9aad-beb187bd1cdc)

    Edit: Sorry, I don't know why the image is rotated :/ The file looks fine on my computer.

    69
    32
    Lemmy Support dan 1y ago 100%
    Can't search for communities in Mastodon

    Hi! I just created a Lemmy server at https://upvote.au/ for my personal use. I created a test community with a test post, but searching for it in Mastodon doesn't work. I tried searching for both `@dan@upvote.au` and `@!dan@upvote.au`.

    I see the requests in the Nginx log:

    ```
    172.19.0.5 - - [13/Jun/2023:22:57:06 -0700] "GET /.well-known/webfinger?resource=acct:test@upvote.au HTTP/1.1" 200 312 "-" "http.rb/5.1.1 (Mastodon/4.1.2; +https://toot.d.sb/)"
    172.19.0.5 - - [13/Jun/2023:22:57:06 -0700] "GET /c/test HTTP/1.1" 200 10033 "-" "http.rb/5.1.1 (Mastodon/4.1.2; +https://toot.d.sb/)"
    ```

    However, no results appear in Mastodon. Any ideas?

    1
    0
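    For anyone debugging something similar: the first Nginx log line above is Mastodon's WebFinger lookup, which you can reproduce by building the same URL from the handle. A rough sketch of that translation (the function name is mine, for illustration only - the actual lookup is defined by RFC 7033):

    ```javascript
    // Build the WebFinger URL a fediverse server queries when resolving a
    // handle like "@name@host" or "!community@host".
    function webfingerUrl(handle) {
      // Strip any leading "@" / "!" prefixes, then split name from host.
      const m = handle.match(/^[@!]*([^@]+)@(.+)$/);
      if (!m) throw new Error("invalid handle");
      const [, name, host] = m;
      return `https://${host}/.well-known/webfinger?resource=acct:${name}@${host}`;
    }
    ```

    Fetching that URL yourself (e.g. with curl) and checking that the JSON response includes a `self` link is a quick way to tell whether the problem is on the Lemmy side or the Mastodon side.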
    test dan 1y ago 100%
    test

    test 1

    1
    0