Instead, we can compose software from services and modules (packages; in the Ruby world, these are gems) and reason about the interactions between them.
A Facebook developer who works on the React project shared some insight into package boundaries during a conversation about dependency injection. They isolate their tests at package boundaries rather than at a finer level, which matches how code actually interacts better than pretending a test is always a first-class client of the code under test.
In my experience, DI is useful, but it has a cost in code complexity, so we should only use it where appropriate. For things that actually have multiple implementations (think of a plugin system) it makes sense (in fact, we use explicit DI in the React core for our pluggable rendering backends).
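To make that plugin case concrete in the Ruby world this chapter works in, here is a minimal sketch of explicit dependency injection for a pluggable rendering backend. The Renderer and backend classes are invented for illustration; they are not React's actual internals.

```ruby
# Two real implementations of the same backend interface.
class DomBackend
  def draw(tree)
    "drawing #{tree} onto the live page"
  end
end

class StringBackend
  def draw(tree)
    "<html>#{tree}</html>"
  end
end

# The backend is injected explicitly, so swapping implementations
# requires no change to Renderer itself.
class Renderer
  def initialize(backend)
    @backend = backend
  end

  def render(tree)
    @backend.draw(tree)
  end
end

Renderer.new(DomBackend.new).render("App")
Renderer.new(StringBackend.new).render("App")
```

Here DI pays for its complexity because both implementations are real production code, not test doubles.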
But for testing, I think DI is overkill. Whitebox testing, where you test the internals as well as the interface, tends to catch more bugs, but when you change the internals the tests tend to change with them. DI doesn't make much sense here, since part of the point of DI is to separate the parts that change from the parts that don't; if your tests change in lockstep with the internals anyway, that separation buys you nothing. So if you're introducing DI just for testing, you're effectively over-engineering everything.
Instead, we mock at the module level (an approach sometimes called "service locator"). For a given test run you can declare that require('moduleA') should actually load moduleA-mock (dig around mock-modules to see what I mean).
This has the advantage of not requiring any refactoring of the production code just for testing. I've found that the claim that "refactoring to support DI makes your code better" is a bunch of baloney in practice when you're trying to ship products (at least based on my experience at Facebook, where we DI'd all of our PHP tests starting in late 2011 and are now trying to dig ourselves out of that pit).
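A Ruby analog of that module-level swap is a constant swap at the package boundary. Below is a rough sketch using RSpec's stub_const (a real rspec-mocks method); Checkout and PaymentGateway are hypothetical names invented for illustration, not anything from mock-modules itself.

```ruby
# Code under test: no injection, it names its collaborator directly.
class PaymentGateway
  def self.charge(amount)
    raise "would hit the real payment service"
  end
end

class Checkout
  def purchase(amount)
    PaymentGateway.charge(amount)
  end
end

RSpec.describe Checkout do
  it "charges through the gateway" do
    # Swap the whole collaborator at its boundary for this example
    # only; the production code needs no DI-motivated refactoring.
    stub_const("PaymentGateway", class_double("PaymentGateway", charge: :ok))

    expect(Checkout.new.purchase(100)).to eq(:ok)
  end
end
```

The test seam lives in the test rather than in the production code's constructor signatures, which is exactly the trade the quoted developer describes.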