> [mocking HTTP..] We’re still completely implementation-dependent. If we want to pass a new query string parameter to our service, for example, we’ll also need to add it to the test so that nock will match the request.
Yes, if you change your API you need to update your tests anyway.
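To make the quoted complaint concrete: a nock-style mock only replies when the incoming request matches everything it was set up with, query string included, so adding a parameter in the client code forces an edit to the test. A minimal sketch of that kind of matching (plain JavaScript for illustration, not nock's actual internals; all names are invented):

```javascript
// Minimal sketch of nock-style request matching (not nock's real code).
// The mock replies only when method, path, and query all match exactly.
function makeMock(method, path, query) {
  return {
    matches: function (req) {
      return req.method === method &&
             req.path === path &&
             JSON.stringify(req.query) === JSON.stringify(query);
    }
  };
}

var mock = makeMock("GET", "/users", { page: "1" });

// Matches today:
//   mock.matches({ method: "GET", path: "/users", query: { page: "1" } })
// Add a new query parameter in the client and the mock no longer matches,
// so the test has to change too:
//   mock.matches({ method: "GET", path: "/users",
//                  query: { page: "1", sort: "asc" } })
```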
---
I haven't played with it or looked at the code, but the only advantage I see is speed of writing tests: hit run, store the "tape", and you don't have to write any specific mocks (which makes it very good imo). Except when you want to test edge cases (specific error messages from the server, timeouts during the response, errors at the network level, etc).
I wouldn't talk about the speed since it should be the same as a traditional mock.
> Look, no implementation details!
Isn't the HTTP module considered stable? Your API is more likely to change than Node's HTTP implementation (when was the last time that changed?), in which case you need to record new tapes anyway.
> Except when you want to test edge cases (specific error messages from the server, time outs during response, errors on the network level, etc).
Yep, these features are definitely on our roadmap. This is one of the reasons we chose node modules as the tape format: it's just JavaScript. For example, if you want to add a delay, simply edit your tape and wrap `res.end` in a `setTimeout`. Additionally, I want to [make it so tapes format the response according to the content-type][1] so that you can easily edit them by hand and have complete control over the response.
> Yes, if you change your API you need to update your tests anyway.
And if you mock your responses, you can easily edit the mock to bring it up to the latest version of the API.
This "tape" approach leaves you with a bunch of opaque blobs that may or may not be easy to recreate or update, since they require a server to be in a specific state to create them in the first place.
I don't like recording requests/responses for exactly this reason. I prefer to write the test by hand, without running the code, because I want the test to match what I expect my code to do, not to record what it actually does. Just yesterday I caught an error where a recorded response had been used as the expectation, so the test passed. The bug was only exposed when I wrote a new test and manually created the request/response objects based on the docs. That test failed, and I found the bug (a field was being transposed during the transform of the response).
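A toy illustration of why a recorded expectation can hide this kind of bug; every name here is invented, not the actual code from the story:

```javascript
// Hypothetical transform with the kind of bug described above:
// the two name fields are transposed.
function transformUser(apiResponse) {
  return { first: apiResponse.last_name, last: apiResponse.first_name };
}

// A "recorded" expectation is captured from the buggy code itself, so it
// always agrees with the output and the test passes, hiding the bug:
var recorded = { first: "Lovelace", last: "Ada" };

// An expectation written by hand from the API docs disagrees with the
// buggy output, so that test fails and exposes the bug:
var fromDocs = { first: "Ada", last: "Lovelace" };

var actual = transformUser({ first_name: "Ada", last_name: "Lovelace" });
// actual deep-equals `recorded` (bug hidden) but not `fromDocs` (bug caught).
```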
I tried to implement something similar, so kudos to the people who completed it. What would actually be very useful, IMO, is being able to use this in vulnerability testing. All the automated pentesting tools I know of can only probe the most basic vulnerabilities, in a black-box way. Imagine you could capture some sample traffic coming into your web server. Most requests go through very similar routes; rarely will you find a user hitting page X from page Y, especially if page Y is an API that people rarely use or that's protected. You could also use the sample traffic to learn whether there are any potentially malicious payloads. It's like putting up a real firewall, I guess.
`yakbak` is about service virtualization: it helps you remove third-party dependencies from your tests by replacing them with recorded responses.
`diffy` is a reverse proxy that multiplexes requests to multiple endpoints, compares the responses, and detects regressions between them.
`gor` is about intercepting your production traffic continuously (it's not a proxy, more like a network analyzer) and replaying it on demand to your test environments (you can modify and filter this traffic in various ways).